Dec 09 14:11:56 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 09 14:11:57 crc kubenswrapper[5173]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 09 14:11:57 crc kubenswrapper[5173]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 09 14:11:57 crc kubenswrapper[5173]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 09 14:11:57 crc kubenswrapper[5173]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 09 14:11:57 crc kubenswrapper[5173]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 09 14:11:57 crc kubenswrapper[5173]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.441609 5173 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444429 5173 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444449 5173 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444454 5173 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444459 5173 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444464 5173 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444468 5173 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444472 5173 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444477 5173 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444482 5173 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444488 5173 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
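[editor's note] The deprecation warnings above all point the same way: these command-line flags should move into the KubeletConfiguration file named by --config (here /etc/kubernetes/kubelet.conf, per the FLAG dump later in this log). A minimal sketch of the equivalent config-file stanzas, assuming the kubelet.config.k8s.io/v1beta1 API; the values simply mirror the flag values logged in the FLAG dump below, not anything beyond this log:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint (value from FLAG dump below)
containerRuntimeEndpoint: /var/run/crio/crio.sock
# replaces --register-with-taints=node-role.kubernetes.io/master=:NoSchedule
registerWithTaints:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
# replaces --system-reserved=cpu=200m,ephemeral-storage=350Mi,memory=350Mi
systemReserved:
  cpu: 200m
  ephemeral-storage: 350Mi
  memory: 350Mi
# replaces --volume-plugin-dir
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# --minimum-container-ttl-duration has no direct config-file field;
# the warning above says to tune evictionHard / evictionSoft instead

The --pod-infra-container-image warning is different in kind: per the server.go:212 line, the flag is slated for removal because image GC will learn the sandbox image from the CRI, so the matching pause image should also be configured on the runtime (CRI-O here).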
Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444493 5173 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444497 5173 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444501 5173 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444505 5173 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444508 5173 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444512 5173 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444515 5173 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444520 5173 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444524 5173 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444533 5173 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444536 5173 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444540 5173 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444543 5173 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444547 5173 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444550 5173 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444554 5173 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444557 5173 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444561 5173 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444564 5173 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444568 5173 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444571 5173 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444575 5173 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444578 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444582 5173 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444586 5173 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444590 5173 
feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444593 5173 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444597 5173 feature_gate.go:328] unrecognized feature gate: Example2 Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444600 5173 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444604 5173 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444608 5173 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444613 5173 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444616 5173 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444620 5173 feature_gate.go:328] unrecognized feature gate: Example Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444623 5173 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444626 5173 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444630 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444636 5173 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444642 5173 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444646 5173 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444650 5173 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444654 5173 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444658 5173 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444662 5173 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444667 5173 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444671 5173 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444675 5173 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444680 5173 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444685 5173 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444690 5173 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444693 5173 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444697 5173 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444701 5173 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444704 5173 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444708 5173 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444712 5173 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444717 5173 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444720 5173 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444724 5173 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444727 5173 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444731 5173 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444734 5173 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444738 5173 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444743 5173 feature_gate.go:328] unrecognized feature gate: 
VolumeGroupSnapshot Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444747 5173 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444751 5173 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444754 5173 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444757 5173 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444761 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444764 5173 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444768 5173 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444771 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444775 5173 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444778 5173 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444781 5173 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.444785 5173 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445255 5173 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445261 5173 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445264 5173 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445268 5173 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445271 5173 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445275 5173 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445278 5173 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445281 5173 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445285 5173 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445288 5173 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445292 5173 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445295 5173 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445299 5173 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445302 
5173 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445306 5173 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445309 5173 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445312 5173 feature_gate.go:328] unrecognized feature gate: Example2 Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445316 5173 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445319 5173 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445324 5173 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445328 5173 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445331 5173 feature_gate.go:328] unrecognized feature gate: Example Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445335 5173 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445339 5173 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445342 5173 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445346 5173 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445353 5173 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445367 5173 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445371 5173 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445374 5173 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445378 5173 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445383 5173 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445387 5173 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445391 5173 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445394 5173 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445398 5173 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445402 5173 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445405 5173 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445409 5173 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445412 5173 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445416 5173 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445419 5173 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445423 5173 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445426 5173 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445430 5173 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445434 5173 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445438 5173 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445441 5173 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445445 5173 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445448 5173 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445452 5173 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445460 5173 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
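[editor's note] Each flood of feature_gate.go:328 warnings is one pass over the same gate set: the kubelet is handed the cluster's full list, logs every gate its own binary does not define as "unrecognized" (these appear to be OpenShift cluster-level gates such as GatewayAPI or ManagedBootImages, not upstream kubelet gates), and applies the ones it does know, with caveats for KMSv1 (deprecated) and ServiceAccountTokenNodeBinding (already GA). In config-file terms the known gates land in the featureGates map; a minimal sketch, assuming only the values this log resolves in its final "feature gates: {map[...]}" line:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  KMSv1: true                          # deprecated; triggers the feature_gate.go:349 warning
  ServiceAccountTokenNodeBinding: true # GA; setting it explicitly also draws a warning
  ImageVolume: true
  UserNamespacesSupport: true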
Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445466 5173 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445470 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445473 5173 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445477 5173 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445480 5173 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445484 5173 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445487 5173 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445491 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445494 5173 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445497 5173 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445501 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445505 5173 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445508 5173 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445512 5173 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445515 5173 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445519 5173 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445522 5173 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445525 5173 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445529 5173 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445532 5173 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445536 5173 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445541 5173 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445544 5173 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445548 5173 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445551 5173 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:11:57 crc kubenswrapper[5173]: 
W1209 14:11:57.445555 5173 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445558 5173 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445562 5173 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445565 5173 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445568 5173 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445572 5173 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445578 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445582 5173 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.445586 5173 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446140 5173 flags.go:64] FLAG: --address="0.0.0.0" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446153 5173 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446164 5173 flags.go:64] FLAG: --anonymous-auth="true" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446170 5173 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446176 5173 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446180 5173 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446185 5173 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446191 5173 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446195 5173 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446199 5173 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446204 5173 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446209 5173 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446213 5173 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446218 5173 flags.go:64] FLAG: --cgroup-root="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446222 5173 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446226 5173 flags.go:64] FLAG: --client-ca-file="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446231 5173 flags.go:64] FLAG: --cloud-config="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446235 5173 flags.go:64] FLAG: --cloud-provider="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446239 5173 flags.go:64] FLAG: --cluster-dns="[]" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446244 5173 flags.go:64] 
FLAG: --cluster-domain="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446248 5173 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446252 5173 flags.go:64] FLAG: --config-dir="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446255 5173 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446260 5173 flags.go:64] FLAG: --container-log-max-files="5" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446265 5173 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446270 5173 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446274 5173 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446278 5173 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446283 5173 flags.go:64] FLAG: --contention-profiling="false" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446295 5173 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446299 5173 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446303 5173 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446306 5173 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446312 5173 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446316 5173 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446320 5173 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446324 5173 flags.go:64] FLAG: --enable-load-reader="false" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446327 5173 flags.go:64] FLAG: --enable-server="true" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446331 5173 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446336 5173 flags.go:64] FLAG: --event-burst="100" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446340 5173 flags.go:64] FLAG: --event-qps="50" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446343 5173 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446347 5173 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446351 5173 flags.go:64] FLAG: --eviction-hard="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446371 5173 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446375 5173 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446381 5173 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446385 5173 flags.go:64] FLAG: --eviction-soft="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446390 5173 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 
14:11:57.446395 5173 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446400 5173 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446404 5173 flags.go:64] FLAG: --experimental-mounter-path="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446408 5173 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446413 5173 flags.go:64] FLAG: --fail-swap-on="true" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446418 5173 flags.go:64] FLAG: --feature-gates="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446424 5173 flags.go:64] FLAG: --file-check-frequency="20s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446429 5173 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446433 5173 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446438 5173 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446443 5173 flags.go:64] FLAG: --healthz-port="10248" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446447 5173 flags.go:64] FLAG: --help="false" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446454 5173 flags.go:64] FLAG: --hostname-override="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446459 5173 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446463 5173 flags.go:64] FLAG: --http-check-frequency="20s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446467 5173 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446472 5173 flags.go:64] FLAG: --image-credential-provider-config="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446476 5173 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446480 5173 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446485 5173 flags.go:64] FLAG: --image-service-endpoint="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446490 5173 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446495 5173 flags.go:64] FLAG: --kube-api-burst="100" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446499 5173 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446505 5173 flags.go:64] FLAG: --kube-api-qps="50" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446509 5173 flags.go:64] FLAG: --kube-reserved="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446514 5173 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446518 5173 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446523 5173 flags.go:64] FLAG: --kubelet-cgroups="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446528 5173 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446532 5173 flags.go:64] FLAG: --lock-file="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446537 5173 
flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446542 5173 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446547 5173 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446557 5173 flags.go:64] FLAG: --log-json-split-stream="false" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446561 5173 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446565 5173 flags.go:64] FLAG: --log-text-split-stream="false" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446569 5173 flags.go:64] FLAG: --logging-format="text" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446573 5173 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446577 5173 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446581 5173 flags.go:64] FLAG: --manifest-url="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446585 5173 flags.go:64] FLAG: --manifest-url-header="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446592 5173 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446597 5173 flags.go:64] FLAG: --max-open-files="1000000" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446603 5173 flags.go:64] FLAG: --max-pods="110" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446611 5173 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446616 5173 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446622 5173 flags.go:64] FLAG: --memory-manager-policy="None" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446626 5173 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446632 5173 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446637 5173 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446642 5173 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446655 5173 flags.go:64] FLAG: --node-status-max-images="50" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446660 5173 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446666 5173 flags.go:64] FLAG: --oom-score-adj="-999" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446672 5173 flags.go:64] FLAG: --pod-cidr="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446677 5173 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446687 5173 flags.go:64] FLAG: --pod-manifest-path="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446692 5173 flags.go:64] FLAG: --pod-max-pids="-1" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446697 5173 flags.go:64] FLAG: --pods-per-core="0" Dec 09 14:11:57 crc 
kubenswrapper[5173]: I1209 14:11:57.446702 5173 flags.go:64] FLAG: --port="10250" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446707 5173 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446712 5173 flags.go:64] FLAG: --provider-id="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446718 5173 flags.go:64] FLAG: --qos-reserved="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446723 5173 flags.go:64] FLAG: --read-only-port="10255" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446729 5173 flags.go:64] FLAG: --register-node="true" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446735 5173 flags.go:64] FLAG: --register-schedulable="true" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446741 5173 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446751 5173 flags.go:64] FLAG: --registry-burst="10" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446757 5173 flags.go:64] FLAG: --registry-qps="5" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446762 5173 flags.go:64] FLAG: --reserved-cpus="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446767 5173 flags.go:64] FLAG: --reserved-memory="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446773 5173 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446778 5173 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446783 5173 flags.go:64] FLAG: --rotate-certificates="false" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446788 5173 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446794 5173 flags.go:64] FLAG: --runonce="false" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446798 5173 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446809 5173 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446814 5173 flags.go:64] FLAG: --seccomp-default="false" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446819 5173 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446823 5173 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446829 5173 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446833 5173 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446839 5173 flags.go:64] FLAG: --storage-driver-password="root" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446847 5173 flags.go:64] FLAG: --storage-driver-secure="false" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446852 5173 flags.go:64] FLAG: --storage-driver-table="stats" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446857 5173 flags.go:64] FLAG: --storage-driver-user="root" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446862 5173 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446867 5173 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446872 
5173 flags.go:64] FLAG: --system-cgroups="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446877 5173 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446885 5173 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446890 5173 flags.go:64] FLAG: --tls-cert-file="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446896 5173 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446902 5173 flags.go:64] FLAG: --tls-min-version="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446907 5173 flags.go:64] FLAG: --tls-private-key-file="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446911 5173 flags.go:64] FLAG: --topology-manager-policy="none" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446915 5173 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446920 5173 flags.go:64] FLAG: --topology-manager-scope="container" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446925 5173 flags.go:64] FLAG: --v="2" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446932 5173 flags.go:64] FLAG: --version="false" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446937 5173 flags.go:64] FLAG: --vmodule="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446944 5173 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.446948 5173 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447058 5173 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447064 5173 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447069 5173 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447073 5173 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447078 5173 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447086 5173 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447090 5173 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447094 5173 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447099 5173 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447103 5173 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447107 5173 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447111 5173 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447118 5173 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 09 14:11:57 crc 
kubenswrapper[5173]: W1209 14:11:57.447123 5173 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447127 5173 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447133 5173 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447138 5173 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447143 5173 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447147 5173 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447152 5173 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447157 5173 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447161 5173 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447165 5173 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447168 5173 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447172 5173 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447176 5173 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447180 5173 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447184 5173 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447188 5173 feature_gate.go:328] unrecognized feature gate: Example2 Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447192 5173 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447196 5173 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447200 5173 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447203 5173 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447206 5173 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447210 5173 feature_gate.go:328] unrecognized feature gate: Example Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447213 5173 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447216 5173 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447223 5173 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447226 
5173 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447229 5173 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447233 5173 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447236 5173 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447241 5173 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447245 5173 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447251 5173 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447254 5173 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447258 5173 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447261 5173 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447264 5173 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447268 5173 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447271 5173 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447274 5173 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447278 5173 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447281 5173 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447284 5173 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447288 5173 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447291 5173 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447294 5173 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447298 5173 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447301 5173 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447304 5173 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447308 5173 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447311 5173 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447314 5173 feature_gate.go:328] unrecognized feature gate: 
VolumeGroupSnapshot Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447317 5173 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447321 5173 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447324 5173 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447328 5173 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447331 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447336 5173 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447340 5173 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447344 5173 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447351 5173 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447355 5173 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447378 5173 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447385 5173 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447395 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447399 5173 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447403 5173 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447407 5173 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447410 5173 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447413 5173 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447416 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447419 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447423 5173 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.447426 5173 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.447617 5173 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true 
TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.457899 5173 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.457981 5173 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458063 5173 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458074 5173 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458079 5173 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458085 5173 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458090 5173 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458095 5173 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458099 5173 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458104 5173 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458147 5173 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458154 5173 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458160 5173 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458165 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458170 5173 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458175 5173 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458181 5173 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458186 5173 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458193 5173 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458198 5173 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458203 5173 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458209 5173 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458215 5173 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458219 5173 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458224 5173 
feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458229 5173 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458233 5173 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458238 5173 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458242 5173 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458247 5173 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458251 5173 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458256 5173 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458261 5173 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458268 5173 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458273 5173 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458277 5173 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458281 5173 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458285 5173 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458289 5173 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458293 5173 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458298 5173 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458302 5173 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458307 5173 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458312 5173 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458316 5173 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458321 5173 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458328 5173 feature_gate.go:328] unrecognized feature gate: Example Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458333 5173 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458338 5173 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458343 5173 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 14:11:57 crc 
kubenswrapper[5173]: W1209 14:11:57.458348 5173 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458352 5173 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458357 5173 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458385 5173 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458390 5173 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458396 5173 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458400 5173 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458405 5173 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458410 5173 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458414 5173 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458419 5173 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458423 5173 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458428 5173 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458433 5173 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458438 5173 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458443 5173 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458450 5173 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458458 5173 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458463 5173 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458469 5173 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458474 5173 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458478 5173 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458483 5173 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458488 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458493 5173 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458498 5173 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458503 5173 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458508 5173 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458514 5173 feature_gate.go:328] unrecognized feature gate: Example2 Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458520 5173 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458526 5173 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458537 5173 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458542 5173 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458548 5173 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458554 5173 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458560 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458565 5173 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458570 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.458580 5173 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458757 5173 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458770 5173 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458775 5173 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458781 5173 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458786 5173 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458791 5173 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458796 5173 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458801 5173 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458805 5173 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458810 5173 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458816 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458821 5173 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458826 5173 feature_gate.go:328] unrecognized feature gate: Example2 Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458831 5173 feature_gate.go:328] unrecognized feature gate: 
ImageModeStatusReporting Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458836 5173 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458841 5173 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458846 5173 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458851 5173 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458856 5173 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458860 5173 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458865 5173 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458873 5173 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458878 5173 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458883 5173 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458888 5173 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458893 5173 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458898 5173 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458902 5173 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458907 5173 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458913 5173 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458917 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458922 5173 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458927 5173 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458932 5173 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458936 5173 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458940 5173 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458945 5173 feature_gate.go:328] unrecognized feature gate: Example Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458950 5173 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458954 5173 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458959 5173 
feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458963 5173 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458969 5173 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458973 5173 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458979 5173 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458984 5173 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458989 5173 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458993 5173 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.458998 5173 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459003 5173 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459007 5173 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459012 5173 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459016 5173 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459021 5173 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459028 5173 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459033 5173 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459038 5173 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459042 5173 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459047 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459051 5173 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459056 5173 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459061 5173 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459066 5173 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459072 5173 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459077 5173 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 
14:11:57.459082 5173 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459086 5173 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459091 5173 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459095 5173 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459100 5173 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459104 5173 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459109 5173 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459113 5173 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459117 5173 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459123 5173 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459127 5173 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459132 5173 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459139 5173 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459144 5173 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459149 5173 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459154 5173 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459160 5173 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459165 5173 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459170 5173 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459176 5173 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459180 5173 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 09 14:11:57 crc kubenswrapper[5173]: W1209 14:11:57.459186 5173 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.459195 5173 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true 
VolumeAttributesClass:false]} Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.459701 5173 server.go:962] "Client rotation is on, will bootstrap in background" Dec 09 14:11:57 crc kubenswrapper[5173]: E1209 14:11:57.463333 5173 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.466998 5173 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.467232 5173 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.467800 5173 server.go:1019] "Starting client certificate rotation" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.468141 5173 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.471815 5173 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.488958 5173 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.490892 5173 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 09 14:11:57 crc kubenswrapper[5173]: E1209 14:11:57.491538 5173 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.501746 5173 log.go:25] "Validated CRI v1 runtime API" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.559699 5173 log.go:25] "Validated CRI v1 image API" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.561508 5173 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.566423 5173 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-09-14-05-55-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.566465 5173 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.608676 5173 manager.go:217] Machine: {Timestamp:2025-12-09 14:11:57.58282308 +0000 
UTC m=+0.508105347 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:b723954a-7a7f-4e69-bb6f-4921ffb1c94e BootID:7d8a1fb4-b79b-40c8-87ab-701c2aec36f3 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:cb:3e:f7 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:cb:3e:f7 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:08:0c:f5 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:14:f1:15 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:79:e4:9d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:4b:4f:41 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:e2:8f:74:b1:34:7f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:7e:f5:3c:89:b9:a2 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 
Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.609008 5173 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.609323 5173 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.611423 5173 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.611493 5173 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.611742 5173 topology_manager.go:138] "Creating topology manager with none policy" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.611755 5173 container_manager_linux.go:306] "Creating device plugin manager" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.611779 5173 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.613424 5173 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.613988 5173 state_mem.go:36] "Initialized new in-memory state store" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.614199 5173 server.go:1267] "Using root directory" path="/var/lib/kubelet" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.614793 5173 kubelet.go:491] "Attempting to sync node with API server" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.614822 5173 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.614844 5173 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.614862 5173 kubelet.go:397] "Adding apiserver pod source" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.614887 5173 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.617526 5173 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.617547 5173 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.618457 5173 state_checkpoint.go:81] "State 
checkpoint: restored pod resource state from checkpoint" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.618473 5173 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.620023 5173 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.620355 5173 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Dec 09 14:11:57 crc kubenswrapper[5173]: E1209 14:11:57.620706 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 09 14:11:57 crc kubenswrapper[5173]: E1209 14:11:57.620761 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.621034 5173 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.621433 5173 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.621462 5173 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.621470 5173 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.621477 5173 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.621485 5173 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.621493 5173 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.621500 5173 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.621508 5173 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.621516 5173 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.621527 5173 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.621538 5173 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.621657 5173 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.622161 5173 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 09 14:11:57 crc 
kubenswrapper[5173]: I1209 14:11:57.622178 5173 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.624010 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.654724 5173 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.654834 5173 server.go:1295] "Started kubelet" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.655040 5173 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.655155 5173 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.655304 5173 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.656278 5173 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 09 14:11:57 crc systemd[1]: Started Kubernetes Kubelet. Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.657396 5173 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.657477 5173 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.658378 5173 volume_manager.go:295] "The desired_state_of_world populator starts" Dec 09 14:11:57 crc kubenswrapper[5173]: E1209 14:11:57.658388 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.658409 5173 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.658402 5173 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 09 14:11:57 crc kubenswrapper[5173]: E1209 14:11:57.659090 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 09 14:11:57 crc kubenswrapper[5173]: E1209 14:11:57.673291 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="200ms" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.674086 5173 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.674137 5173 factory.go:55] Registering systemd factory Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.674147 5173 factory.go:223] Registration of the systemd container factory successfully Dec 09 14:11:57 crc kubenswrapper[5173]: E1209 
14:11:57.673992 5173 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.144:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f9176a188acdf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.654781151 +0000 UTC m=+0.580063418,LastTimestamp:2025-12-09 14:11:57.654781151 +0000 UTC m=+0.580063418,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.675235 5173 factory.go:153] Registering CRI-O factory Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.675538 5173 factory.go:223] Registration of the crio container factory successfully Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.675607 5173 server.go:317] "Adding debug handlers to kubelet server" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.675613 5173 factory.go:103] Registering Raw factory Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.676115 5173 manager.go:1196] Started watching for new ooms in manager Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.676922 5173 manager.go:319] Starting recovery of all containers Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.687853 5173 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.710503 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711070 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711085 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711097 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711109 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711120 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" 
volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711130 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711142 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711157 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711167 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711177 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711215 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711226 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711236 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711271 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711283 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711292 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" 
volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711303 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711313 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711323 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711332 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711343 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711379 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711394 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711405 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711415 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711443 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711454 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711470 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711480 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711503 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711513 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711527 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711538 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711550 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711560 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711571 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711588 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711603 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" 
volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711615 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711625 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711647 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711656 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711666 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711676 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711685 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711700 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711710 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711720 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711732 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" 
seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711743 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711753 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711763 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711774 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711788 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711805 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711822 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711831 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711848 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711858 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711868 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Dec 
09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.711877 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712142 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712153 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712167 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712179 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712190 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712207 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712217 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712261 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712271 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712281 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: 
I1209 14:11:57.712290 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712301 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712312 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712321 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712331 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712341 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712355 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712395 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712407 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712417 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712427 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712435 5173 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712445 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712455 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712465 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712475 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712486 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712500 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712529 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712540 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712553 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712563 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712577 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712608 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712620 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712631 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712643 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712654 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712666 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712677 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712694 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712708 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712724 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712737 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712754 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712765 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712775 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712785 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712796 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712805 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712827 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712839 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712849 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712859 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712869 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" 
volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712884 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712900 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712911 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712923 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712933 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712945 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712957 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712967 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712978 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.712989 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713000 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" 
volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713011 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713022 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713037 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713057 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713075 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713094 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713109 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713124 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713133 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713146 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713156 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" 
volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713166 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713178 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713188 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713200 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713209 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713220 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713230 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713240 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713251 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713265 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713276 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" 
volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713287 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713297 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713314 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713324 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713334 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713345 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713373 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713384 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713395 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713414 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713424 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" 
seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713437 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713448 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713457 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713469 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713480 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713490 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713502 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713511 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713520 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713531 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713541 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: 
I1209 14:11:57.713551 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713561 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713574 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713584 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713594 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713605 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713615 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713625 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713634 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713644 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713653 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: 
I1209 14:11:57.713663 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713677 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713688 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713700 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713711 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713722 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713733 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713745 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713755 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713766 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713777 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713786 5173 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713795 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713805 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713815 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713825 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713834 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713844 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713855 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713865 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713875 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713885 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713896 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713912 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713923 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713933 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713943 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713952 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713962 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713972 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713980 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.713990 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.714000 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.714010 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.714020 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.714029 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.714040 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.714050 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.714060 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.714073 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.714082 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.714187 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.714200 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.714217 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.714227 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.714238 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.714250 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.714266 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.714389 5173 manager.go:324] Recovery completed Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.715235 5173 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.715275 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.715289 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.715300 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.715311 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.715321 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.715332 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" 
volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.715342 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.715355 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.715485 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.715499 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.715509 5173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.715519 5173 reconstruct.go:97] "Volume reconstruction finished" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.715527 5173 reconciler.go:26] "Reconciler: start to sync state" Dec 09 14:11:57 crc kubenswrapper[5173]: E1209 14:11:57.715887 5173 watcher.go:152] Failed to watch directory "/sys/fs/cgroup/system.slice/crc-routes-controller.service": inotify_add_watch /sys/fs/cgroup/system.slice/crc-routes-controller.service: no such file or directory Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.732041 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.733681 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.733729 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.733742 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.735776 5173 cpu_manager.go:222] "Starting CPU manager" policy="none" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.735902 5173 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.736071 5173 state_mem.go:36] "Initialized new in-memory state store" Dec 09 14:11:57 crc kubenswrapper[5173]: E1209 14:11:57.758731 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:11:57 crc kubenswrapper[5173]: E1209 14:11:57.859646 5173 kubelet_node_status.go:515] "Error getting the current 
node from lister" err="node \"crc\" not found" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.869228 5173 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.869296 5173 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.869335 5173 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 09 14:11:57 crc kubenswrapper[5173]: I1209 14:11:57.869346 5173 kubelet.go:2451] "Starting kubelet main sync loop" Dec 09 14:11:57 crc kubenswrapper[5173]: E1209 14:11:57.869476 5173 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 09 14:11:57 crc kubenswrapper[5173]: E1209 14:11:57.872678 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 09 14:11:57 crc kubenswrapper[5173]: E1209 14:11:57.874187 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="400ms" Dec 09 14:11:57 crc kubenswrapper[5173]: E1209 14:11:57.959777 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:11:57 crc kubenswrapper[5173]: E1209 14:11:57.970108 5173 kubelet.go:2475] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.029924 5173 policy_none.go:49] "None policy: Start" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.029973 5173 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.029997 5173 state_mem.go:35] "Initializing new in-memory state store" Dec 09 14:11:58 crc kubenswrapper[5173]: E1209 14:11:58.060493 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:11:58 crc kubenswrapper[5173]: E1209 14:11:58.161225 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:11:58 crc kubenswrapper[5173]: E1209 14:11:58.170400 5173 kubelet.go:2475] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.178525 5173 manager.go:341] "Starting Device Plugin manager" Dec 09 14:11:58 crc kubenswrapper[5173]: E1209 14:11:58.178837 5173 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.178860 5173 server.go:85] "Starting device plugin registration server" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.179415 5173 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.179435 5173 container_log_manager.go:189] "Initializing container log rotate 
workers" workers=1 monitorPeriod="10s" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.179587 5173 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.179751 5173 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.179762 5173 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 09 14:11:58 crc kubenswrapper[5173]: E1209 14:11:58.184087 5173 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Dec 09 14:11:58 crc kubenswrapper[5173]: E1209 14:11:58.184130 5173 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:11:58 crc kubenswrapper[5173]: E1209 14:11:58.274843 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="800ms" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.279996 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.281280 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.281331 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.281345 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.281420 5173 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:11:58 crc kubenswrapper[5173]: E1209 14:11:58.281900 5173 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.144:6443: connect: connection refused" node="crc" Dec 09 14:11:58 crc kubenswrapper[5173]: E1209 14:11:58.460868 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.482771 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.483828 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.483874 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.483898 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.483929 5173 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:11:58 crc 
kubenswrapper[5173]: E1209 14:11:58.484399 5173 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.144:6443: connect: connection refused" node="crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.571203 5173 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.571523 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.573620 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.573695 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.573714 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.574757 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.574933 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.574991 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.575802 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.575852 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.575866 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.575806 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.575942 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.575960 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.576841 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.577155 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.577190 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.577627 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.577652 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.577662 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.578385 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.578803 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.578852 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.578871 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.578875 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.578986 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.579147 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.579192 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.579204 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.579879 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.579923 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.579939 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.579984 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.580104 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.580147 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.580469 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.580504 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.580517 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.580806 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.580836 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.580848 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.581302 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.581345 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.581883 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.581920 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.581933 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:11:58 crc kubenswrapper[5173]: E1209 14:11:58.604660 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:11:58 crc kubenswrapper[5173]: E1209 14:11:58.611713 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.625982 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Dec 09 14:11:58 crc kubenswrapper[5173]: E1209 14:11:58.629458 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:11:58 crc kubenswrapper[5173]: E1209 14:11:58.645235 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:11:58 crc kubenswrapper[5173]: E1209 14:11:58.649976 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.728114 5173 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.728547 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.728584 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.728613 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.728638 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.728658 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.728713 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.728735 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.728838 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.728933 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.728982 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.729015 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.729040 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.729065 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.729085 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.729107 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.729130 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.729144 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.729158 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.729183 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.729216 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.729243 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.729245 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.729477 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.729516 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.729564 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.729568 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.729599 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 
14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.729705 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.730337 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.830510 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.830598 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.830620 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.830645 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.830715 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.830787 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.830888 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.830900 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: 
I1209 14:11:58.830926 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.830947 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.830991 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831079 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831084 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831142 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831106 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831214 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831254 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831221 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 
09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831287 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831320 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831377 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831422 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831319 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831425 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831379 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831492 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831510 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831545 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831573 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831617 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831652 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.831798 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.884967 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.886026 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.886069 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.886080 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.886102 5173 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:11:58 crc kubenswrapper[5173]: E1209 14:11:58.886660 5173 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.144:6443: connect: connection refused" node="crc" Dec 09 14:11:58 crc kubenswrapper[5173]: E1209 14:11:58.891708 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.905597 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.912060 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.930922 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.946069 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: W1209 14:11:58.949700 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-fc0f59654d1ba03b4ecb9aebb6ead325554e4a6cc3c5280895dc2dcca3e1360c WatchSource:0}: Error finding container fc0f59654d1ba03b4ecb9aebb6ead325554e4a6cc3c5280895dc2dcca3e1360c: Status 404 returned error can't find the container with id fc0f59654d1ba03b4ecb9aebb6ead325554e4a6cc3c5280895dc2dcca3e1360c Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.950888 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 09 14:11:58 crc kubenswrapper[5173]: W1209 14:11:58.954777 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-3ff35de495ca1a9741a3660e95ab55e17dce3bd6322455cf1f28b40f3e4f609c WatchSource:0}: Error finding container 3ff35de495ca1a9741a3660e95ab55e17dce3bd6322455cf1f28b40f3e4f609c: Status 404 returned error can't find the container with id 3ff35de495ca1a9741a3660e95ab55e17dce3bd6322455cf1f28b40f3e4f609c Dec 09 14:11:58 crc kubenswrapper[5173]: I1209 14:11:58.965245 5173 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 14:11:58 crc kubenswrapper[5173]: W1209 14:11:58.965913 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-54c027a5f3b890bb1cb8e458563c04ee0105f10d9056e28474cb49e57e0c90c8 WatchSource:0}: Error finding container 54c027a5f3b890bb1cb8e458563c04ee0105f10d9056e28474cb49e57e0c90c8: Status 404 returned error can't find the container with id 54c027a5f3b890bb1cb8e458563c04ee0105f10d9056e28474cb49e57e0c90c8 Dec 09 14:11:58 crc kubenswrapper[5173]: W1209 14:11:58.974938 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-10eebdf96e958d9b62546b70599bf364ba266bccc560e249aaa0e21bcc6e78de WatchSource:0}: Error finding container 10eebdf96e958d9b62546b70599bf364ba266bccc560e249aaa0e21bcc6e78de: Status 404 returned error can't find the container with id 10eebdf96e958d9b62546b70599bf364ba266bccc560e249aaa0e21bcc6e78de Dec 09 14:11:58 crc kubenswrapper[5173]: E1209 14:11:58.985094 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 09 14:11:59 crc kubenswrapper[5173]: E1209 14:11:59.076165 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="1.6s" Dec 09 14:11:59 crc kubenswrapper[5173]: E1209 14:11:59.200711 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 09 14:11:59 crc kubenswrapper[5173]: I1209 14:11:59.623177 5173 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 09 14:11:59 crc kubenswrapper[5173]: E1209 14:11:59.624472 5173 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 09 14:11:59 crc kubenswrapper[5173]: I1209 14:11:59.624838 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Dec 09 14:11:59 crc kubenswrapper[5173]: I1209 14:11:59.687044 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:11:59 crc kubenswrapper[5173]: I1209 14:11:59.690297 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:11:59 crc kubenswrapper[5173]: I1209 14:11:59.690377 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:11:59 crc kubenswrapper[5173]: I1209 14:11:59.690393 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:11:59 crc kubenswrapper[5173]: I1209 14:11:59.690438 5173 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:11:59 crc kubenswrapper[5173]: E1209 14:11:59.691039 5173 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.144:6443: connect: connection refused" node="crc" Dec 09 14:11:59 crc kubenswrapper[5173]: I1209 14:11:59.877930 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"10eebdf96e958d9b62546b70599bf364ba266bccc560e249aaa0e21bcc6e78de"} Dec 09 14:11:59 crc kubenswrapper[5173]: I1209 14:11:59.878835 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"54c027a5f3b890bb1cb8e458563c04ee0105f10d9056e28474cb49e57e0c90c8"} Dec 09 14:11:59 crc kubenswrapper[5173]: I1209 14:11:59.879820 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"3ff35de495ca1a9741a3660e95ab55e17dce3bd6322455cf1f28b40f3e4f609c"} Dec 09 14:11:59 crc kubenswrapper[5173]: I1209 14:11:59.881050 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"fc0f59654d1ba03b4ecb9aebb6ead325554e4a6cc3c5280895dc2dcca3e1360c"} Dec 09 14:11:59 crc kubenswrapper[5173]: I1209 14:11:59.882344 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"cb06f10d820889753708c6c665010a463a7657bb6a17e25194c39be2e1262c61"} Dec 09 14:12:00 crc kubenswrapper[5173]: E1209 14:12:00.208801 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.625559 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Dec 09 14:12:00 crc kubenswrapper[5173]: E1209 14:12:00.646403 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 09 14:12:00 crc kubenswrapper[5173]: E1209 14:12:00.677393 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="3.2s" Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.889137 5173 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="ed11940cfc0c03b0cd7b18b1d7cbe1683725e871a03c6c43986b37be8a6ac784" exitCode=0 Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.889244 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"ed11940cfc0c03b0cd7b18b1d7cbe1683725e871a03c6c43986b37be8a6ac784"} Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.889457 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.893046 5173 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="25dca08dc8d419af5e78f9e368a80743b48798dcd50aee5f0858bc1727a824e8" exitCode=0 Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.893101 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"25dca08dc8d419af5e78f9e368a80743b48798dcd50aee5f0858bc1727a824e8"} Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.893203 5173 
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.893229 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.893239 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:12:00 crc kubenswrapper[5173]: E1209 14:12:00.893577 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.894129 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.896416 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.896469 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.896485 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:12:00 crc kubenswrapper[5173]: E1209 14:12:00.896772 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.898612 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"ded317057b16388136754d75b632a51e96153d2e647d0b58e89ac5f3732b778d"}
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.898655 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"3b658001d1e245caf6af8b7e926021b65cf14fe05e112bd9f5ef1b3b34dbc397"}
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.900297 5173 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="5352232afbb3c547e95e2f19704e725de9906fff2ae76ca7f228ddf65d71f124" exitCode=0
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.900381 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"5352232afbb3c547e95e2f19704e725de9906fff2ae76ca7f228ddf65d71f124"}
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.901012 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.904216 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.904283 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.904304 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:12:00 crc kubenswrapper[5173]: E1209 14:12:00.904602 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.905280 5173 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20" exitCode=0
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.905341 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20"}
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.905606 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.906162 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.906194 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.906206 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:12:00 crc kubenswrapper[5173]: E1209 14:12:00.906462 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.910313 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.911263 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.911293 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:12:00 crc kubenswrapper[5173]: I1209 14:12:00.911304 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:12:01 crc kubenswrapper[5173]: E1209 14:12:01.033669 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 09 14:12:01 crc kubenswrapper[5173]: I1209 14:12:01.291578 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:12:01 crc kubenswrapper[5173]: I1209 14:12:01.292809 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:12:01 crc kubenswrapper[5173]: I1209 14:12:01.292859 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:12:01 crc kubenswrapper[5173]: I1209 14:12:01.292881 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:12:01 crc kubenswrapper[5173]: I1209 14:12:01.292913 5173 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 09 14:12:01 crc kubenswrapper[5173]: E1209 14:12:01.293552 5173 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.144:6443: connect: connection refused" node="crc"
Dec 09 14:12:01 crc kubenswrapper[5173]: E1209 14:12:01.408448 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 09 14:12:01 crc kubenswrapper[5173]: I1209 14:12:01.625848 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.625711 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.911597 5173 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="eacd477eb0e3af4fc175c9fa0420e700ae385a111ecbd41c975c2e3687639d92" exitCode=0
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.911680 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"eacd477eb0e3af4fc175c9fa0420e700ae385a111ecbd41c975c2e3687639d92"}
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.911875 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.913013 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.913046 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.913054 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:12:02 crc kubenswrapper[5173]: E1209 14:12:02.913597 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.916121 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"24bfde81209161c48b816ce80d7a29d805ef660302aa6b8a9350fc545c7f8727"}
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.916155 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"16ff70fe83260431bb761ab05817e149e47d0aa9773fad494524d389e0eb98ef"}
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.916168 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"22111dcf300ec536cc6a1016634e372dd581bfc8c1965f1ef72025eca7bd27a2"}
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.916289 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.918467 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.918495 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.918508 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:12:02 crc kubenswrapper[5173]: E1209 14:12:02.918776 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.925263 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"004085d552ba1c7640d1262d02bd33a94f35afa0dcfa640e560588a800163b1f"}
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.925315 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"8f547532154a93a64f89399378cd1ddf1d539f5ccdf318f5358ab3393b1a30ae"}
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.925400 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.925923 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.925984 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.926000 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:12:02 crc kubenswrapper[5173]: E1209 14:12:02.926254 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.928616 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"843a523bdd75f421c91ce69ed248e099d8a783680b394eca105778950f9d908f"}
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.928673 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.930724 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.930829 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.930887 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:02 crc kubenswrapper[5173]: E1209 14:12:02.931095 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.933478 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd"} Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.933563 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d"} Dec 09 14:12:02 crc kubenswrapper[5173]: I1209 14:12:02.933621 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f"} Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.756218 5173 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.940331 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7e13482d2d36afa9cca61511ca25482c668c74519e473bc0187e69169c932e84"} Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.940410 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7"} Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.940711 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.941637 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.941715 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.941741 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:03 crc kubenswrapper[5173]: E1209 14:12:03.942169 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.943467 5173 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="f9d4c76e5aead2cf533b5799e9d8b585203b915594390a713b19c361c77dab9a" exitCode=0 Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.943512 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"f9d4c76e5aead2cf533b5799e9d8b585203b915594390a713b19c361c77dab9a"} Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.943653 5173 kubelet_node_status.go:413] "Setting node 
annotation to enable volume controller attach/detach" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.943665 5173 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.943690 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.943758 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.943702 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.944322 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.944408 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.944407 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.944455 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.944435 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.944488 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.944544 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.944495 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.944616 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:03 crc kubenswrapper[5173]: E1209 14:12:03.944955 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:03 crc kubenswrapper[5173]: E1209 14:12:03.945159 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.945280 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.945318 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:03 crc kubenswrapper[5173]: I1209 14:12:03.945334 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:03 crc kubenswrapper[5173]: E1209 14:12:03.945546 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:03 crc kubenswrapper[5173]: E1209 14:12:03.945631 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"crc\" not found" node="crc" Dec 09 14:12:04 crc kubenswrapper[5173]: I1209 14:12:04.494346 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:04 crc kubenswrapper[5173]: I1209 14:12:04.495638 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:04 crc kubenswrapper[5173]: I1209 14:12:04.495705 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:04 crc kubenswrapper[5173]: I1209 14:12:04.495719 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:04 crc kubenswrapper[5173]: I1209 14:12:04.495777 5173 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:12:04 crc kubenswrapper[5173]: I1209 14:12:04.952250 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"a1501e862b689b4aabc3ad6a8aa5f8021ccdf06efb17e8f190b8a58d3a57b778"} Dec 09 14:12:04 crc kubenswrapper[5173]: I1209 14:12:04.952307 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"40690c3e060def2a504e5e96407e7e684a5d65be6a03e3c0c2964c5613ac3a80"} Dec 09 14:12:04 crc kubenswrapper[5173]: I1209 14:12:04.952320 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"07cb68ad1d7939b032d461e4405874dbea3c0c580d711c636b9c1bc98534ddad"} Dec 09 14:12:04 crc kubenswrapper[5173]: I1209 14:12:04.952331 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"54680e71891f8c4b8d3378c6a2cebfadccf93498ccbb0cf6da1b23063f9256eb"} Dec 09 14:12:04 crc kubenswrapper[5173]: I1209 14:12:04.957568 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:04 crc kubenswrapper[5173]: I1209 14:12:04.957633 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:12:04 crc kubenswrapper[5173]: I1209 14:12:04.958475 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:04 crc kubenswrapper[5173]: I1209 14:12:04.958509 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:04 crc kubenswrapper[5173]: I1209 14:12:04.958522 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:04 crc kubenswrapper[5173]: E1209 14:12:04.958985 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.052617 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.053017 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.054020 5173 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.054091 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.054106 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:05 crc kubenswrapper[5173]: E1209 14:12:05.054539 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.588904 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.589229 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.590659 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.590712 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.590723 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:05 crc kubenswrapper[5173]: E1209 14:12:05.591200 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.962881 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"78cdb950caf4d3cbe020e51b49b41823961f04a520144ddc0f055b1ac4015773"} Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.962991 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.963269 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.967479 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.967564 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.967583 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.967487 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.967719 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:05 crc kubenswrapper[5173]: I1209 14:12:05.967743 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:05 crc kubenswrapper[5173]: E1209 14:12:05.971230 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:05 crc kubenswrapper[5173]: E1209 14:12:05.971502 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:06 crc kubenswrapper[5173]: I1209 14:12:06.368892 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:12:06 crc kubenswrapper[5173]: I1209 14:12:06.804726 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:12:06 crc kubenswrapper[5173]: I1209 14:12:06.965624 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:06 crc kubenswrapper[5173]: I1209 14:12:06.965702 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:06 crc kubenswrapper[5173]: I1209 14:12:06.966967 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:06 crc kubenswrapper[5173]: I1209 14:12:06.967026 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:06 crc kubenswrapper[5173]: I1209 14:12:06.967054 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:06 crc kubenswrapper[5173]: I1209 14:12:06.967057 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:06 crc kubenswrapper[5173]: I1209 14:12:06.967065 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:06 crc kubenswrapper[5173]: I1209 14:12:06.967093 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:06 crc kubenswrapper[5173]: E1209 14:12:06.967912 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:06 crc kubenswrapper[5173]: E1209 14:12:06.968140 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:07 crc kubenswrapper[5173]: I1209 14:12:07.513067 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Dec 09 14:12:07 crc kubenswrapper[5173]: I1209 14:12:07.968451 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:07 crc kubenswrapper[5173]: I1209 14:12:07.968451 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:07 crc kubenswrapper[5173]: I1209 14:12:07.969030 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:07 crc kubenswrapper[5173]: I1209 14:12:07.969066 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:07 crc kubenswrapper[5173]: I1209 14:12:07.969078 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:07 crc kubenswrapper[5173]: E1209 14:12:07.969506 5173 kubelet.go:3336] "No need to create a mirror 
pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:07 crc kubenswrapper[5173]: I1209 14:12:07.969619 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:07 crc kubenswrapper[5173]: I1209 14:12:07.969745 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:07 crc kubenswrapper[5173]: I1209 14:12:07.969761 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:07 crc kubenswrapper[5173]: E1209 14:12:07.970280 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:08 crc kubenswrapper[5173]: E1209 14:12:08.184329 5173 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:12:09 crc kubenswrapper[5173]: I1209 14:12:09.395809 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:12:09 crc kubenswrapper[5173]: I1209 14:12:09.396203 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:09 crc kubenswrapper[5173]: I1209 14:12:09.397610 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:09 crc kubenswrapper[5173]: I1209 14:12:09.397654 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:09 crc kubenswrapper[5173]: I1209 14:12:09.397667 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:09 crc kubenswrapper[5173]: E1209 14:12:09.398105 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:12 crc kubenswrapper[5173]: I1209 14:12:12.157442 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:12:12 crc kubenswrapper[5173]: I1209 14:12:12.157965 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:12 crc kubenswrapper[5173]: I1209 14:12:12.159533 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:12 crc kubenswrapper[5173]: I1209 14:12:12.159612 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:12 crc kubenswrapper[5173]: I1209 14:12:12.159640 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:12 crc kubenswrapper[5173]: E1209 14:12:12.160264 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:12 crc kubenswrapper[5173]: I1209 14:12:12.165433 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:12:12 crc kubenswrapper[5173]: I1209 14:12:12.308810 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:12:12 crc kubenswrapper[5173]: I1209 14:12:12.320259 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:12:12 crc kubenswrapper[5173]: I1209 14:12:12.396461 5173 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 14:12:12 crc kubenswrapper[5173]: I1209 14:12:12.396623 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 14:12:12 crc kubenswrapper[5173]: I1209 14:12:12.983976 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:12 crc kubenswrapper[5173]: I1209 14:12:12.985417 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:12 crc kubenswrapper[5173]: I1209 14:12:12.985475 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:12 crc kubenswrapper[5173]: I1209 14:12:12.985488 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:12 crc kubenswrapper[5173]: E1209 14:12:12.985893 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:13 crc kubenswrapper[5173]: I1209 14:12:13.626634 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 09 14:12:13 crc kubenswrapper[5173]: E1209 14:12:13.758387 5173 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 09 14:12:13 crc kubenswrapper[5173]: E1209 14:12:13.878546 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Dec 09 14:12:13 crc kubenswrapper[5173]: I1209 14:12:13.986548 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:13 crc kubenswrapper[5173]: I1209 14:12:13.987370 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:13 crc kubenswrapper[5173]: I1209 14:12:13.987445 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:13 crc 
kubenswrapper[5173]: I1209 14:12:13.987460 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:13 crc kubenswrapper[5173]: E1209 14:12:13.988211 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:14 crc kubenswrapper[5173]: I1209 14:12:14.134516 5173 trace.go:236] Trace[724130965]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 14:12:04.132) (total time: 10001ms): Dec 09 14:12:14 crc kubenswrapper[5173]: Trace[724130965]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:12:14.134) Dec 09 14:12:14 crc kubenswrapper[5173]: Trace[724130965]: [10.001766168s] [10.001766168s] END Dec 09 14:12:14 crc kubenswrapper[5173]: E1209 14:12:14.134574 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 09 14:12:14 crc kubenswrapper[5173]: I1209 14:12:14.405611 5173 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 14:12:14 crc kubenswrapper[5173]: I1209 14:12:14.405710 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 09 14:12:14 crc kubenswrapper[5173]: I1209 14:12:14.411959 5173 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 14:12:14 crc kubenswrapper[5173]: I1209 14:12:14.412011 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 09 14:12:14 crc kubenswrapper[5173]: I1209 14:12:14.448519 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 09 14:12:14 crc kubenswrapper[5173]: I1209 14:12:14.448875 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:14 crc kubenswrapper[5173]: I1209 14:12:14.450040 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:14 crc kubenswrapper[5173]: I1209 14:12:14.450102 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:14 crc 
Dec 09 14:12:14 crc kubenswrapper[5173]: E1209 14:12:14.450876 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:12:14 crc kubenswrapper[5173]: I1209 14:12:14.487752 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Dec 09 14:12:14 crc kubenswrapper[5173]: I1209 14:12:14.989181 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:12:14 crc kubenswrapper[5173]: I1209 14:12:14.989895 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:12:14 crc kubenswrapper[5173]: I1209 14:12:14.989931 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:12:14 crc kubenswrapper[5173]: I1209 14:12:14.989945 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:12:14 crc kubenswrapper[5173]: E1209 14:12:14.990433 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:12:15 crc kubenswrapper[5173]: I1209 14:12:15.009943 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Dec 09 14:12:15 crc kubenswrapper[5173]: I1209 14:12:15.992833 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:12:15 crc kubenswrapper[5173]: I1209 14:12:15.993728 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:12:15 crc kubenswrapper[5173]: I1209 14:12:15.993773 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:12:15 crc kubenswrapper[5173]: I1209 14:12:15.993791 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:12:15 crc kubenswrapper[5173]: E1209 14:12:15.994492 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:12:16 crc kubenswrapper[5173]: I1209 14:12:16.375584 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:12:16 crc kubenswrapper[5173]: I1209 14:12:16.375836 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:12:16 crc kubenswrapper[5173]: I1209 14:12:16.377671 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:12:16 crc kubenswrapper[5173]: I1209 14:12:16.377735 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:12:16 crc kubenswrapper[5173]: I1209 14:12:16.377769 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:12:16 crc kubenswrapper[5173]: E1209 14:12:16.378533 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:12:16 crc kubenswrapper[5173]: I1209 14:12:16.383566 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:12:16 crc kubenswrapper[5173]: I1209 14:12:16.995320 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:12:16 crc kubenswrapper[5173]: I1209 14:12:16.996605 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:12:16 crc kubenswrapper[5173]: I1209 14:12:16.996664 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:12:16 crc kubenswrapper[5173]: I1209 14:12:16.996676 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:12:16 crc kubenswrapper[5173]: E1209 14:12:16.997218 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:12:18 crc kubenswrapper[5173]: E1209 14:12:18.184521 5173 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 09 14:12:19 crc kubenswrapper[5173]: I1209 14:12:19.409833 5173 trace.go:236] Trace[359731666]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 14:12:06.972) (total time: 12437ms):
Dec 09 14:12:19 crc kubenswrapper[5173]: Trace[359731666]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 12437ms (14:12:19.409)
Dec 09 14:12:19 crc kubenswrapper[5173]: Trace[359731666]: [12.437445547s] [12.437445547s] END
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.409895 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 09 14:12:19 crc kubenswrapper[5173]: I1209 14:12:19.414916 5173 trace.go:236] Trace[574492802]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 14:12:04.823) (total time: 14590ms):
Dec 09 14:12:19 crc kubenswrapper[5173]: Trace[574492802]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 14590ms (14:12:19.414)
Dec 09 14:12:19 crc kubenswrapper[5173]: Trace[574492802]: [14.590947981s] [14.590947981s] END
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.414900 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a188acdf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.654781151 +0000 UTC m=+0.580063418,LastTimestamp:2025-12-09 14:11:57.654781151 +0000 UTC m=+0.580063418,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.414984 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 09 14:12:19 crc kubenswrapper[5173]: I1209 14:12:19.415167 5173 trace.go:236] Trace[1308372995]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 14:12:04.473) (total time: 14941ms):
Dec 09 14:12:19 crc kubenswrapper[5173]: Trace[1308372995]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 14941ms (14:12:19.415)
Dec 09 14:12:19 crc kubenswrapper[5173]: Trace[1308372995]: [14.941928182s] [14.941928182s] END
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.415198 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.417394 5173 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.417299 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63cf664 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733705316 +0000 UTC m=+0.658987563,LastTimestamp:2025-12-09 14:11:57.733705316 +0000 UTC m=+0.658987563,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.423567 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63d6ebe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733736126 +0000 UTC m=+0.659018363,LastTimestamp:2025-12-09 14:11:57.733736126 +0000 UTC m=+0.659018363,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.429794 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63d97eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733746667 +0000 UTC m=+0.659028914,LastTimestamp:2025-12-09 14:11:57.733746667 +0000 UTC m=+0.659028914,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.434493 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176c1387e16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:58.186397206 +0000 UTC m=+1.111679453,LastTimestamp:2025-12-09 14:11:58.186397206 +0000 UTC m=+1.111679453,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.440207 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63cf664\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63cf664 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733705316 +0000 UTC m=+0.658987563,LastTimestamp:2025-12-09 14:11:58.281313986 +0000 UTC m=+1.206596233,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.445945 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63d6ebe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63d6ebe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733736126 +0000 UTC m=+0.659018363,LastTimestamp:2025-12-09 14:11:58.281339446 +0000 UTC m=+1.206621693,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.451968 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63d97eb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63d97eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733746667 +0000 UTC m=+0.659028914,LastTimestamp:2025-12-09 14:11:58.281354287 +0000 UTC m=+1.206636524,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:19 crc kubenswrapper[5173]: I1209 14:12:19.454495 5173 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body=
Dec 09 14:12:19 crc kubenswrapper[5173]: I1209 14:12:19.454575 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.457463 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63cf664\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63cf664 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733705316 +0000 UTC m=+0.658987563,LastTimestamp:2025-12-09 14:11:58.483859277 +0000 UTC m=+1.409141544,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:19 crc kubenswrapper[5173]: I1209 14:12:19.459568 5173 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37336->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Dec 09 14:12:19 crc kubenswrapper[5173]: I1209 14:12:19.459711 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37336->192.168.126.11:17697: read: connection reset by peer"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.462545 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63d6ebe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63d6ebe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733736126 +0000 UTC m=+0.659018363,LastTimestamp:2025-12-09 14:11:58.483886767 +0000 UTC m=+1.409169034,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.467848 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63d97eb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63d97eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733746667 +0000 UTC m=+0.659028914,LastTimestamp:2025-12-09 14:11:58.483906897 +0000 UTC m=+1.409189144,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.472642 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63cf664\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63cf664 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733705316 +0000 UTC m=+0.658987563,LastTimestamp:2025-12-09 14:11:58.573664759 +0000 UTC m=+1.498947006,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.477224 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63d6ebe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63d6ebe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733736126 +0000 UTC m=+0.659018363,LastTimestamp:2025-12-09 14:11:58.573705499 +0000 UTC m=+1.498987746,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.482073 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63d97eb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63d97eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733746667 +0000 UTC m=+0.659028914,LastTimestamp:2025-12-09 14:11:58.57372126 +0000 UTC m=+1.499003517,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.487608 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63cf664\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63cf664 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733705316 +0000 UTC m=+0.658987563,LastTimestamp:2025-12-09 14:11:58.57583684 +0000 UTC m=+1.501119087,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.493537 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63d6ebe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63d6ebe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733736126 +0000 UTC m=+0.659018363,LastTimestamp:2025-12-09 14:11:58.57585939 +0000 UTC m=+1.501141637,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.499287 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63d97eb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63d97eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733746667 +0000 UTC m=+0.659028914,LastTimestamp:2025-12-09 14:11:58.575870851 +0000 UTC m=+1.501153098,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209
14:12:19.506941 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63cf664\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63cf664 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733705316 +0000 UTC m=+0.658987563,LastTimestamp:2025-12-09 14:11:58.575921742 +0000 UTC m=+1.501203989,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.514219 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63d6ebe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63d6ebe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733736126 +0000 UTC m=+0.659018363,LastTimestamp:2025-12-09 14:11:58.575951042 +0000 UTC m=+1.501233289,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.516819 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63d97eb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63d97eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733746667 +0000 UTC m=+0.659028914,LastTimestamp:2025-12-09 14:11:58.575966752 +0000 UTC m=+1.501248999,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.520566 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63cf664\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63cf664 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733705316 +0000 UTC m=+0.658987563,LastTimestamp:2025-12-09 14:11:58.577644655 +0000 UTC m=+1.502926902,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.525305 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63d6ebe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63d6ebe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733736126 +0000 UTC m=+0.659018363,LastTimestamp:2025-12-09 14:11:58.577657485 +0000 UTC m=+1.502939732,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.529996 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63d97eb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63d97eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733746667 +0000 UTC m=+0.659028914,LastTimestamp:2025-12-09 14:11:58.577667575 +0000 UTC m=+1.502949822,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.535009 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63cf664\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63cf664 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733705316 +0000 UTC m=+0.658987563,LastTimestamp:2025-12-09 14:11:58.578867308 +0000 UTC m=+1.504149545,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.541779 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f9176a63d6ebe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f9176a63d6ebe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:57.733736126 +0000 UTC 
m=+0.659018363,LastTimestamp:2025-12-09 14:11:58.57897391 +0000 UTC m=+1.504256157,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.548049 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f9176efaa42cc openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:58.965605068 +0000 UTC m=+1.890887305,LastTimestamp:2025-12-09 14:11:58.965605068 +0000 UTC m=+1.890887305,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.554841 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9176efb10e2a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:58.966050346 +0000 UTC m=+1.891332593,LastTimestamp:2025-12-09 14:11:58.966050346 +0000 UTC m=+1.891332593,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.562115 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f9176efd01c11 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:58.968085521 +0000 UTC 
m=+1.893367768,LastTimestamp:2025-12-09 14:11:58.968085521 +0000 UTC m=+1.893367768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.568200 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f9176f0539b4c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:58.976703308 +0000 UTC m=+1.901985555,LastTimestamp:2025-12-09 14:11:58.976703308 +0000 UTC m=+1.901985555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.572654 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9176f0988a34 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:58.981220916 +0000 UTC m=+1.906503173,LastTimestamp:2025-12-09 14:11:58.981220916 +0000 UTC m=+1.906503173,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.577804 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f91772688fb9d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:59.886171037 +0000 UTC m=+2.811453284,LastTimestamp:2025-12-09 14:11:59.886171037 +0000 UTC m=+2.811453284,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 
14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.583682 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f917726990cf7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:59.887224055 +0000 UTC m=+2.812506302,LastTimestamp:2025-12-09 14:11:59.887224055 +0000 UTC m=+2.812506302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.590718 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f917726ab0d3f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:59.888403775 +0000 UTC m=+2.813686022,LastTimestamp:2025-12-09 14:11:59.888403775 +0000 UTC m=+2.813686022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.599819 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f917726b03e9b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:59.888744091 +0000 UTC m=+2.814026338,LastTimestamp:2025-12-09 14:11:59.888744091 +0000 UTC m=+2.814026338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.606849 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f917726b48c3a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:59.889026106 +0000 UTC m=+2.814308353,LastTimestamp:2025-12-09 14:11:59.889026106 +0000 UTC m=+2.814308353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.615828 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9177275fcc3c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:59.900249148 +0000 UTC m=+2.825531395,LastTimestamp:2025-12-09 14:11:59.900249148 +0000 UTC m=+2.825531395,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.622290 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177277d7b82 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:59.902194562 +0000 UTC m=+2.827476809,LastTimestamp:2025-12-09 14:11:59.902194562 +0000 UTC m=+2.827476809,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.627363 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f9177277dd7c6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:59.902218182 +0000 UTC m=+2.827500429,LastTimestamp:2025-12-09 
14:11:59.902218182 +0000 UTC m=+2.827500429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: I1209 14:12:19.628424 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.633000 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f91772794833b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:59.903703867 +0000 UTC m=+2.828986124,LastTimestamp:2025-12-09 14:11:59.903703867 +0000 UTC m=+2.828986124,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.640052 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f91772797e1dd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:59.903924701 +0000 UTC m=+2.829206948,LastTimestamp:2025-12-09 14:11:59.903924701 +0000 UTC m=+2.829206948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.647512 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f917727ba4fd7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:11:59.906181079 +0000 UTC m=+2.831463326,LastTimestamp:2025-12-09 14:11:59.906181079 +0000 UTC 
m=+2.831463326,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.652903 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f917749bac64e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:00.47663675 +0000 UTC m=+3.401919007,LastTimestamp:2025-12-09 14:12:00.47663675 +0000 UTC m=+3.401919007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.658048 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f91774ba77334 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:00.508924724 +0000 UTC m=+3.434206961,LastTimestamp:2025-12-09 14:12:00.508924724 +0000 UTC m=+3.434206961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.665707 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f91774bbee808 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:00.51046196 +0000 UTC m=+3.435744207,LastTimestamp:2025-12-09 14:12:00.51046196 +0000 UTC m=+3.435744207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" 
Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.671887 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f917762b0e8b5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:00.895420597 +0000 UTC m=+3.820702864,LastTimestamp:2025-12-09 14:12:00.895420597 +0000 UTC m=+3.820702864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.677297 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f917762d39321 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:00.897692449 +0000 UTC m=+3.822974686,LastTimestamp:2025-12-09 14:12:00.897692449 +0000 UTC m=+3.822974686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.682489 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f9177635d325f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:00.906711647 +0000 UTC m=+3.831993894,LastTimestamp:2025-12-09 14:12:00.906711647 +0000 UTC m=+3.831993894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.689921 5173 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f91776391eb93 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:00.910166931 +0000 UTC m=+3.835449178,LastTimestamp:2025-12-09 14:12:00.910166931 +0000 UTC m=+3.835449178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.695494 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177a8afe739 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.069759801 +0000 UTC m=+4.995042048,LastTimestamp:2025-12-09 14:12:02.069759801 +0000 UTC m=+4.995042048,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.700644 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f9177a8c57844 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.071173188 +0000 UTC m=+4.996455435,LastTimestamp:2025-12-09 14:12:02.071173188 +0000 UTC m=+4.996455435,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.705981 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f9177a8c9405e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.071421022 +0000 UTC m=+4.996703269,LastTimestamp:2025-12-09 14:12:02.071421022 +0000 UTC m=+4.996703269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.710631 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9177a8cdf082 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.071728258 +0000 UTC m=+4.997010505,LastTimestamp:2025-12-09 14:12:02.071728258 +0000 UTC m=+4.997010505,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.715949 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f9177aa83dc06 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.100427782 +0000 UTC m=+5.025710039,LastTimestamp:2025-12-09 14:12:02.100427782 +0000 UTC m=+5.025710039,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.721196 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177aaeb806f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.107220079 +0000 UTC m=+5.032502326,LastTimestamp:2025-12-09 14:12:02.107220079 +0000 UTC 
m=+5.032502326,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.727812 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9177aaeb932f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.107224879 +0000 UTC m=+5.032507126,LastTimestamp:2025-12-09 14:12:02.107224879 +0000 UTC m=+5.032507126,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.732588 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9177ab1a0626 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.110268966 +0000 UTC m=+5.035551213,LastTimestamp:2025-12-09 14:12:02.110268966 +0000 UTC m=+5.035551213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.737210 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177ab1d9ead openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.110504621 +0000 UTC m=+5.035786878,LastTimestamp:2025-12-09 14:12:02.110504621 +0000 UTC m=+5.035786878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 
14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.741820 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9177b9052ca3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.343783587 +0000 UTC m=+5.269065834,LastTimestamp:2025-12-09 14:12:02.343783587 +0000 UTC m=+5.269065834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.746903 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9177ba5496b2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.365765298 +0000 UTC m=+5.291047545,LastTimestamp:2025-12-09 14:12:02.365765298 +0000 UTC m=+5.291047545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.753112 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9177ba71c895 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.367678613 +0000 UTC m=+5.292960860,LastTimestamp:2025-12-09 14:12:02.367678613 +0000 UTC m=+5.292960860,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.757973 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177bde920e9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.425831657 +0000 UTC m=+5.351113904,LastTimestamp:2025-12-09 14:12:02.425831657 +0000 UTC m=+5.351113904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.763795 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f9177be856021 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.436071457 +0000 UTC m=+5.361353704,LastTimestamp:2025-12-09 14:12:02.436071457 +0000 UTC m=+5.361353704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.769636 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177bf01f8be openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.44423699 +0000 UTC m=+5.369519237,LastTimestamp:2025-12-09 14:12:02.44423699 +0000 UTC m=+5.369519237,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.774711 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177bf1705b6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.445616566 +0000 UTC m=+5.370898813,LastTimestamp:2025-12-09 14:12:02.445616566 +0000 UTC m=+5.370898813,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.779663 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f9177bfaaa3b6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.455290806 +0000 UTC m=+5.380573053,LastTimestamp:2025-12-09 14:12:02.455290806 +0000 UTC m=+5.380573053,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.784236 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f9177bfd610a3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.458136739 +0000 UTC m=+5.383418996,LastTimestamp:2025-12-09 14:12:02.458136739 +0000 UTC m=+5.383418996,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.790343 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9177c8b629ba openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.607040954 +0000 UTC m=+5.532323201,LastTimestamp:2025-12-09 14:12:02.607040954 +0000 UTC m=+5.532323201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.795288 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f9177c8d7b455 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.609239125 +0000 UTC m=+5.534521372,LastTimestamp:2025-12-09 14:12:02.609239125 +0000 UTC m=+5.534521372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.799218 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f9177c9d24173 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.625659251 +0000 UTC m=+5.550941498,LastTimestamp:2025-12-09 14:12:02.625659251 +0000 UTC m=+5.550941498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.803762 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177cd3a0971 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.682792305 +0000 UTC 
m=+5.608074552,LastTimestamp:2025-12-09 14:12:02.682792305 +0000 UTC m=+5.608074552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.808089 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f9177cd9dcde2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.689330658 +0000 UTC m=+5.614612905,LastTimestamp:2025-12-09 14:12:02.689330658 +0000 UTC m=+5.614612905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.813111 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177ce7ede45 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.704080453 +0000 UTC m=+5.629362700,LastTimestamp:2025-12-09 14:12:02.704080453 +0000 UTC m=+5.629362700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.817400 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177ce94886d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.705500269 +0000 UTC m=+5.630782516,LastTimestamp:2025-12-09 14:12:02.705500269 +0000 UTC m=+5.630782516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.822214 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f9177cea485bf openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.706548159 +0000 UTC m=+5.631830406,LastTimestamp:2025-12-09 14:12:02.706548159 +0000 UTC m=+5.631830406,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.827297 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f9177db129f8b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.915090315 +0000 UTC m=+5.840372562,LastTimestamp:2025-12-09 14:12:02.915090315 +0000 UTC m=+5.840372562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.832407 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177dbaed636 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.925327926 +0000 UTC m=+5.850610183,LastTimestamp:2025-12-09 14:12:02.925327926 +0000 UTC m=+5.850610183,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.834034 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177ddac378c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.958710668 +0000 UTC m=+5.883992905,LastTimestamp:2025-12-09 14:12:02.958710668 +0000 UTC m=+5.883992905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.838381 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177ddc01562 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.960012642 +0000 UTC m=+5.885294889,LastTimestamp:2025-12-09 14:12:02.960012642 +0000 UTC m=+5.885294889,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.842690 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f9177eb0373dd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:03.182531549 +0000 UTC m=+6.107813796,LastTimestamp:2025-12-09 14:12:03.182531549 +0000 UTC m=+6.107813796,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.848013 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177eb44b928 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:03.186809128 +0000 UTC m=+6.112091375,LastTimestamp:2025-12-09 14:12:03.186809128 +0000 UTC m=+6.112091375,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.853066 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f9177ebf14c4d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:03.198118989 +0000 UTC m=+6.123401236,LastTimestamp:2025-12-09 14:12:03.198118989 +0000 UTC m=+6.123401236,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.858185 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177ec40327e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:03.203289726 +0000 UTC m=+6.128571973,LastTimestamp:2025-12-09 14:12:03.203289726 +0000 UTC m=+6.128571973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.869255 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91781890f792 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:03.946780562 +0000 UTC 
m=+6.872062809,LastTimestamp:2025-12-09 14:12:03.946780562 +0000 UTC m=+6.872062809,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.874751 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91782573e595 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:04.162979221 +0000 UTC m=+7.088261468,LastTimestamp:2025-12-09 14:12:04.162979221 +0000 UTC m=+7.088261468,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.879294 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f9178268e68b0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:04.181493936 +0000 UTC m=+7.106776183,LastTimestamp:2025-12-09 14:12:04.181493936 +0000 UTC m=+7.106776183,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.884398 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f917826a4bb7b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:04.182956923 +0000 UTC m=+7.108239170,LastTimestamp:2025-12-09 14:12:04.182956923 +0000 UTC m=+7.108239170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.889233 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f9178320a8927 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:04.374178087 +0000 UTC m=+7.299460334,LastTimestamp:2025-12-09 14:12:04.374178087 +0000 UTC m=+7.299460334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.894380 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f917832d152ed openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:04.387205869 +0000 UTC m=+7.312488116,LastTimestamp:2025-12-09 14:12:04.387205869 +0000 UTC m=+7.312488116,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.899738 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f917832eb8436 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:04.388922422 +0000 UTC m=+7.314204669,LastTimestamp:2025-12-09 14:12:04.388922422 +0000 UTC m=+7.314204669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.904548 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91783ed72ca6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:04.588915878 +0000 UTC m=+7.514198125,LastTimestamp:2025-12-09 14:12:04.588915878 +0000 UTC m=+7.514198125,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.909522 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91783fc5172e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:04.60450795 +0000 UTC m=+7.529790197,LastTimestamp:2025-12-09 14:12:04.60450795 +0000 UTC m=+7.529790197,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.916393 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91783fdc63fa openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:04.606034938 +0000 UTC m=+7.531317195,LastTimestamp:2025-12-09 14:12:04.606034938 +0000 UTC m=+7.531317195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.921295 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f917852a366fb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:04.921067259 +0000 UTC m=+7.846349516,LastTimestamp:2025-12-09 14:12:04.921067259 +0000 UTC m=+7.846349516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.926392 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f9178541acc4d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:04.945669197 +0000 UTC m=+7.870951444,LastTimestamp:2025-12-09 14:12:04.945669197 +0000 UTC m=+7.870951444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.931440 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91785442f58c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:04.948301196 +0000 UTC m=+7.873583453,LastTimestamp:2025-12-09 14:12:04.948301196 +0000 UTC m=+7.873583453,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.936790 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f917867b70b1e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:05.274675998 +0000 UTC m=+8.199958245,LastTimestamp:2025-12-09 14:12:05.274675998 +0000 UTC m=+8.199958245,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.940335 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f91786960e7a1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:05.302585249 +0000 UTC m=+8.227867496,LastTimestamp:2025-12-09 14:12:05.302585249 +0000 UTC m=+8.227867496,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.947099 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 09 14:12:19 crc kubenswrapper[5173]: &Event{ObjectMeta:{kube-controller-manager-crc.187f917a103693b0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 09 14:12:19 crc kubenswrapper[5173]: body: Dec 09 14:12:19 crc kubenswrapper[5173]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:12.396573616 +0000 UTC m=+15.321855863,LastTimestamp:2025-12-09 14:12:12.396573616 +0000 UTC m=+15.321855863,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 09 14:12:19 crc kubenswrapper[5173]: > Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.952524 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f917a10387af5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:12.396698357 +0000 UTC m=+15.321980604,LastTimestamp:2025-12-09 14:12:12.396698357 +0000 UTC m=+15.321980604,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.958320 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 09 14:12:19 crc kubenswrapper[5173]: &Event{ObjectMeta:{kube-apiserver-crc.187f917a87f70cce openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 09 14:12:19 crc kubenswrapper[5173]: body: 
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 14:12:19 crc kubenswrapper[5173]: Dec 09 14:12:19 crc kubenswrapper[5173]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:14.405676238 +0000 UTC m=+17.330958495,LastTimestamp:2025-12-09 14:12:14.405676238 +0000 UTC m=+17.330958495,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 09 14:12:19 crc kubenswrapper[5173]: > Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.963484 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f917a87f805b7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:14.405739959 +0000 UTC m=+17.331022216,LastTimestamp:2025-12-09 14:12:14.405739959 +0000 UTC m=+17.331022216,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.968650 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f917a87f70cce\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 09 14:12:19 crc kubenswrapper[5173]: &Event{ObjectMeta:{kube-apiserver-crc.187f917a87f70cce openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 09 14:12:19 crc kubenswrapper[5173]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 14:12:19 crc kubenswrapper[5173]: Dec 09 14:12:19 crc kubenswrapper[5173]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:14.405676238 +0000 UTC m=+17.330958495,LastTimestamp:2025-12-09 14:12:14.411992876 +0000 UTC m=+17.337275133,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 09 14:12:19 crc kubenswrapper[5173]: > Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.974144 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f917a87f805b7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.187f917a87f805b7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:14.405739959 +0000 UTC m=+17.331022216,LastTimestamp:2025-12-09 14:12:14.412028507 +0000 UTC m=+17.337310764,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.979862 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 09 14:12:19 crc kubenswrapper[5173]: &Event{ObjectMeta:{kube-apiserver-crc.187f917bb4e6a915 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": EOF Dec 09 14:12:19 crc kubenswrapper[5173]: body: Dec 09 14:12:19 crc kubenswrapper[5173]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:19.454544149 +0000 UTC m=+22.379826406,LastTimestamp:2025-12-09 14:12:19.454544149 +0000 UTC m=+22.379826406,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 09 14:12:19 crc kubenswrapper[5173]: > Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.984183 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f917bb4e778dc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": EOF,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:19.45459734 +0000 UTC m=+22.379879597,LastTimestamp:2025-12-09 14:12:19.45459734 +0000 UTC m=+22.379879597,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.989711 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 09 14:12:19 crc kubenswrapper[5173]: &Event{ObjectMeta:{kube-apiserver-crc.187f917bb5351a84 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:37336->192.168.126.11:17697: read: connection reset by peer Dec 09 14:12:19 crc kubenswrapper[5173]: body: Dec 09 14:12:19 crc kubenswrapper[5173]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:19.459684996 +0000 UTC m=+22.384967253,LastTimestamp:2025-12-09 14:12:19.459684996 +0000 UTC m=+22.384967253,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 09 14:12:19 crc kubenswrapper[5173]: > Dec 09 14:12:19 crc kubenswrapper[5173]: E1209 14:12:19.995294 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f917bb53708fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:37336->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:19.459811578 +0000 UTC m=+22.385093835,LastTimestamp:2025-12-09 14:12:19.459811578 +0000 UTC m=+22.385093835,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:20 crc kubenswrapper[5173]: I1209 14:12:20.005714 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 09 14:12:20 crc kubenswrapper[5173]: I1209 14:12:20.008153 5173 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7e13482d2d36afa9cca61511ca25482c668c74519e473bc0187e69169c932e84" exitCode=255 Dec 09 14:12:20 crc kubenswrapper[5173]: I1209 14:12:20.008213 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"7e13482d2d36afa9cca61511ca25482c668c74519e473bc0187e69169c932e84"} Dec 09 14:12:20 crc kubenswrapper[5173]: I1209 14:12:20.008473 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:20 crc kubenswrapper[5173]: I1209 14:12:20.008956 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:20 crc kubenswrapper[5173]: I1209 14:12:20.008990 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:20 crc kubenswrapper[5173]: I1209 14:12:20.009005 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:20 crc 
kubenswrapper[5173]: E1209 14:12:20.009396 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:20 crc kubenswrapper[5173]: I1209 14:12:20.009653 5173 scope.go:117] "RemoveContainer" containerID="7e13482d2d36afa9cca61511ca25482c668c74519e473bc0187e69169c932e84" Dec 09 14:12:20 crc kubenswrapper[5173]: E1209 14:12:20.016297 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f9177ddc01562\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177ddc01562 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.960012642 +0000 UTC m=+5.885294889,LastTimestamp:2025-12-09 14:12:20.011081352 +0000 UTC m=+22.936363599,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:20 crc kubenswrapper[5173]: E1209 14:12:20.224955 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f9177eb44b928\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177eb44b928 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:03.186809128 +0000 UTC m=+6.112091375,LastTimestamp:2025-12-09 14:12:20.218344094 +0000 UTC m=+23.143626341,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:20 crc kubenswrapper[5173]: I1209 14:12:20.274436 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:12:20 crc kubenswrapper[5173]: I1209 14:12:20.274713 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:20 crc kubenswrapper[5173]: I1209 14:12:20.275828 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:20 crc kubenswrapper[5173]: I1209 14:12:20.275881 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:20 crc kubenswrapper[5173]: I1209 14:12:20.275892 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Dec 09 14:12:20 crc kubenswrapper[5173]: E1209 14:12:20.276342 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:12:20 crc kubenswrapper[5173]: I1209 14:12:20.284457 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 14:12:20 crc kubenswrapper[5173]: E1209 14:12:20.284807 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 09 14:12:20 crc kubenswrapper[5173]: E1209 14:12:20.284807 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f9177ec40327e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177ec40327e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:03.203289726 +0000 UTC m=+6.128571973,LastTimestamp:2025-12-09 14:12:20.276813334 +0000 UTC m=+23.202095581,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:20 crc kubenswrapper[5173]: I1209 14:12:20.630873 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 09 14:12:21 crc kubenswrapper[5173]: I1209 14:12:21.012642 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Dec 09 14:12:21 crc kubenswrapper[5173]: I1209 14:12:21.014108 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:12:21 crc kubenswrapper[5173]: I1209 14:12:21.014399 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"3fa5d7f433b32fd022a0c84fd73dc15b76ba4376f38a44b65d1658496e9e91a6"}
Dec 09 14:12:21 crc kubenswrapper[5173]: I1209 14:12:21.014539 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:12:21 crc kubenswrapper[5173]: I1209 14:12:21.015078 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:12:21 crc kubenswrapper[5173]: I1209 14:12:21.015113 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:12:21 crc kubenswrapper[5173]: I1209 14:12:21.015126 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:12:21 crc kubenswrapper[5173]: E1209 14:12:21.015457 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:12:21 crc kubenswrapper[5173]: I1209 14:12:21.016183 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:12:21 crc kubenswrapper[5173]: I1209 14:12:21.016210 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:12:21 crc kubenswrapper[5173]: I1209 14:12:21.016221 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:12:21 crc kubenswrapper[5173]: E1209 14:12:21.016477 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:12:21 crc kubenswrapper[5173]: I1209 14:12:21.629840 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 09 14:12:21 crc kubenswrapper[5173]: I1209 14:12:21.780073 5173 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 09 14:12:21 crc kubenswrapper[5173]: I1209 14:12:21.801247 5173 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 09 14:12:22 crc kubenswrapper[5173]: I1209 14:12:22.018014 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 09 14:12:22 crc kubenswrapper[5173]: I1209 14:12:22.018526 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Dec 09 14:12:22 crc kubenswrapper[5173]: I1209 14:12:22.020714 5173 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3fa5d7f433b32fd022a0c84fd73dc15b76ba4376f38a44b65d1658496e9e91a6" exitCode=255
Dec 09 14:12:22 crc kubenswrapper[5173]: I1209 14:12:22.020822 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"3fa5d7f433b32fd022a0c84fd73dc15b76ba4376f38a44b65d1658496e9e91a6"}
Dec 09 14:12:22 crc kubenswrapper[5173]: I1209 14:12:22.020901 5173 scope.go:117] "RemoveContainer" containerID="7e13482d2d36afa9cca61511ca25482c668c74519e473bc0187e69169c932e84"
Dec 09 14:12:22 crc kubenswrapper[5173]: I1209 14:12:22.021247 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:12:22 crc kubenswrapper[5173]: I1209 14:12:22.021935 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:12:22 crc kubenswrapper[5173]: I1209 14:12:22.021975 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:12:22 crc kubenswrapper[5173]: I1209 14:12:22.021990 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:12:22 crc kubenswrapper[5173]: E1209 14:12:22.022458 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:12:22 crc kubenswrapper[5173]: I1209 14:12:22.022805 5173 scope.go:117] "RemoveContainer" containerID="3fa5d7f433b32fd022a0c84fd73dc15b76ba4376f38a44b65d1658496e9e91a6"
Dec 09 14:12:22 crc kubenswrapper[5173]: E1209 14:12:22.023133 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 09 14:12:22 crc kubenswrapper[5173]: E1209 14:12:22.029491 5173 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f917c4dff2fea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:22.023065578 +0000 UTC m=+24.948347825,LastTimestamp:2025-12-09 14:12:22.023065578 +0000 UTC m=+24.948347825,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:22 crc kubenswrapper[5173]: I1209 14:12:22.630170 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 09 14:12:22 crc kubenswrapper[5173]: I1209 14:12:22.698940 5173 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:12:23 crc kubenswrapper[5173]: I1209 14:12:23.025542 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 09 14:12:23 crc kubenswrapper[5173]: I1209 14:12:23.027649 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:12:23 crc kubenswrapper[5173]: I1209 14:12:23.028309 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:12:23 crc kubenswrapper[5173]: I1209 14:12:23.028388 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:12:23 crc kubenswrapper[5173]: I1209 14:12:23.028406 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:12:23 crc kubenswrapper[5173]: E1209 14:12:23.028811 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:12:23 crc kubenswrapper[5173]: I1209 14:12:23.029136 5173 scope.go:117] "RemoveContainer" containerID="3fa5d7f433b32fd022a0c84fd73dc15b76ba4376f38a44b65d1658496e9e91a6"
Dec 09 14:12:23 crc kubenswrapper[5173]: E1209 14:12:23.029354 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 09 14:12:23 crc kubenswrapper[5173]: E1209 14:12:23.034012 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f917c4dff2fea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f917c4dff2fea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:22.023065578 +0000 UTC m=+24.948347825,LastTimestamp:2025-12-09 14:12:23.029318102 +0000 UTC m=+25.954600349,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 14:12:23 crc kubenswrapper[5173]: I1209 14:12:23.628742 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 09 14:12:24 crc kubenswrapper[5173]: I1209 14:12:24.629253 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 09 14:12:25 crc kubenswrapper[5173]: E1209 14:12:25.516515 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 09 14:12:25 crc kubenswrapper[5173]: I1209 14:12:25.630380 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 09 14:12:25 crc kubenswrapper[5173]: I1209 14:12:25.818318 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:12:25 crc kubenswrapper[5173]: I1209 14:12:25.820029 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 
09 14:12:25 crc kubenswrapper[5173]: I1209 14:12:25.820085 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:25 crc kubenswrapper[5173]: I1209 14:12:25.820095 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:25 crc kubenswrapper[5173]: I1209 14:12:25.820119 5173 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:12:25 crc kubenswrapper[5173]: E1209 14:12:25.832737 5173 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:12:26 crc kubenswrapper[5173]: I1209 14:12:26.630337 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:27 crc kubenswrapper[5173]: E1209 14:12:27.289184 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:12:27 crc kubenswrapper[5173]: E1209 14:12:27.615565 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 09 14:12:27 crc kubenswrapper[5173]: I1209 14:12:27.636714 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:28 crc kubenswrapper[5173]: E1209 14:12:28.184850 5173 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:12:28 crc kubenswrapper[5173]: I1209 14:12:28.629821 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:28 crc kubenswrapper[5173]: E1209 14:12:28.985339 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 09 14:12:29 crc kubenswrapper[5173]: I1209 14:12:29.632635 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:30 crc kubenswrapper[5173]: I1209 14:12:30.633317 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Dec 09 14:12:31 crc kubenswrapper[5173]: I1209 14:12:31.014988 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:12:31 crc kubenswrapper[5173]: I1209 14:12:31.015473 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:31 crc kubenswrapper[5173]: I1209 14:12:31.018147 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:31 crc kubenswrapper[5173]: I1209 14:12:31.018234 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:31 crc kubenswrapper[5173]: I1209 14:12:31.018260 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:31 crc kubenswrapper[5173]: E1209 14:12:31.019405 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:31 crc kubenswrapper[5173]: I1209 14:12:31.020055 5173 scope.go:117] "RemoveContainer" containerID="3fa5d7f433b32fd022a0c84fd73dc15b76ba4376f38a44b65d1658496e9e91a6" Dec 09 14:12:31 crc kubenswrapper[5173]: E1209 14:12:31.020590 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:12:31 crc kubenswrapper[5173]: E1209 14:12:31.028859 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f917c4dff2fea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f917c4dff2fea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:22.023065578 +0000 UTC m=+24.948347825,LastTimestamp:2025-12-09 14:12:31.020501864 +0000 UTC m=+33.945784141,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:31 crc kubenswrapper[5173]: I1209 14:12:31.629328 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:31 crc kubenswrapper[5173]: E1209 14:12:31.641173 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 09 14:12:32 crc kubenswrapper[5173]: I1209 14:12:32.631024 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:32 crc kubenswrapper[5173]: I1209 14:12:32.833201 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:32 crc kubenswrapper[5173]: I1209 14:12:32.834610 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:32 crc kubenswrapper[5173]: I1209 14:12:32.834689 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:32 crc kubenswrapper[5173]: I1209 14:12:32.834712 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:32 crc kubenswrapper[5173]: I1209 14:12:32.834749 5173 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:12:32 crc kubenswrapper[5173]: E1209 14:12:32.846997 5173 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:12:33 crc kubenswrapper[5173]: I1209 14:12:33.629676 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:34 crc kubenswrapper[5173]: E1209 14:12:34.297068 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:12:34 crc kubenswrapper[5173]: I1209 14:12:34.635712 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:35 crc kubenswrapper[5173]: I1209 14:12:35.631395 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:36 crc kubenswrapper[5173]: I1209 14:12:36.630475 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:37 crc kubenswrapper[5173]: I1209 14:12:37.630782 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:38 crc kubenswrapper[5173]: E1209 14:12:38.185417 5173 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:12:38 
crc kubenswrapper[5173]: I1209 14:12:38.632866 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:39 crc kubenswrapper[5173]: I1209 14:12:39.631841 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:39 crc kubenswrapper[5173]: I1209 14:12:39.847133 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:39 crc kubenswrapper[5173]: I1209 14:12:39.849065 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:39 crc kubenswrapper[5173]: I1209 14:12:39.849120 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:39 crc kubenswrapper[5173]: I1209 14:12:39.849132 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:39 crc kubenswrapper[5173]: I1209 14:12:39.849162 5173 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:12:39 crc kubenswrapper[5173]: E1209 14:12:39.861403 5173 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:12:40 crc kubenswrapper[5173]: E1209 14:12:40.610477 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 09 14:12:40 crc kubenswrapper[5173]: I1209 14:12:40.632562 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:41 crc kubenswrapper[5173]: E1209 14:12:41.303916 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:12:41 crc kubenswrapper[5173]: I1209 14:12:41.632097 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:42 crc kubenswrapper[5173]: I1209 14:12:42.631578 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:43 crc kubenswrapper[5173]: E1209 14:12:43.586484 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User 
\"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 09 14:12:43 crc kubenswrapper[5173]: I1209 14:12:43.633926 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:44 crc kubenswrapper[5173]: I1209 14:12:44.629854 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:45 crc kubenswrapper[5173]: I1209 14:12:45.630484 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:46 crc kubenswrapper[5173]: I1209 14:12:46.631941 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:46 crc kubenswrapper[5173]: I1209 14:12:46.861912 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:46 crc kubenswrapper[5173]: I1209 14:12:46.863312 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:46 crc kubenswrapper[5173]: I1209 14:12:46.863387 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:46 crc kubenswrapper[5173]: I1209 14:12:46.863405 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:46 crc kubenswrapper[5173]: I1209 14:12:46.863437 5173 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:12:46 crc kubenswrapper[5173]: I1209 14:12:46.869824 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:46 crc kubenswrapper[5173]: I1209 14:12:46.870976 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:46 crc kubenswrapper[5173]: I1209 14:12:46.871030 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:46 crc kubenswrapper[5173]: I1209 14:12:46.871047 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:46 crc kubenswrapper[5173]: E1209 14:12:46.871659 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:46 crc kubenswrapper[5173]: E1209 14:12:46.871939 5173 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:12:46 crc kubenswrapper[5173]: I1209 14:12:46.872095 5173 scope.go:117] "RemoveContainer" 
containerID="3fa5d7f433b32fd022a0c84fd73dc15b76ba4376f38a44b65d1658496e9e91a6" Dec 09 14:12:46 crc kubenswrapper[5173]: E1209 14:12:46.879595 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f9177ddc01562\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177ddc01562 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:02.960012642 +0000 UTC m=+5.885294889,LastTimestamp:2025-12-09 14:12:46.87384969 +0000 UTC m=+49.799131937,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:47 crc kubenswrapper[5173]: E1209 14:12:47.092523 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f9177eb44b928\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177eb44b928 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:03.186809128 +0000 UTC m=+6.112091375,LastTimestamp:2025-12-09 14:12:47.085533009 +0000 UTC m=+50.010815266,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:47 crc kubenswrapper[5173]: I1209 14:12:47.104860 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 09 14:12:47 crc kubenswrapper[5173]: E1209 14:12:47.105307 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f9177ec40327e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f9177ec40327e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:03.203289726 +0000 UTC m=+6.128571973,LastTimestamp:2025-12-09 14:12:47.097570513 +0000 UTC 
m=+50.022852760,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:47 crc kubenswrapper[5173]: I1209 14:12:47.107506 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"919dc9b36ab47a3c9ce80c1352d5821dac400391e2ac9c5da3602f0a1da66758"} Dec 09 14:12:47 crc kubenswrapper[5173]: I1209 14:12:47.107776 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:47 crc kubenswrapper[5173]: I1209 14:12:47.108555 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:47 crc kubenswrapper[5173]: I1209 14:12:47.108588 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:47 crc kubenswrapper[5173]: I1209 14:12:47.108597 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:47 crc kubenswrapper[5173]: E1209 14:12:47.108932 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:47 crc kubenswrapper[5173]: I1209 14:12:47.630561 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:48 crc kubenswrapper[5173]: E1209 14:12:48.185807 5173 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:12:48 crc kubenswrapper[5173]: E1209 14:12:48.312884 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:12:48 crc kubenswrapper[5173]: I1209 14:12:48.630598 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:48 crc kubenswrapper[5173]: E1209 14:12:48.984835 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 09 14:12:49 crc kubenswrapper[5173]: I1209 14:12:49.113154 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 09 14:12:49 crc kubenswrapper[5173]: I1209 14:12:49.113658 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 09 14:12:49 crc kubenswrapper[5173]: I1209 14:12:49.115066 5173 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" 
containerID="919dc9b36ab47a3c9ce80c1352d5821dac400391e2ac9c5da3602f0a1da66758" exitCode=255 Dec 09 14:12:49 crc kubenswrapper[5173]: I1209 14:12:49.115112 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"919dc9b36ab47a3c9ce80c1352d5821dac400391e2ac9c5da3602f0a1da66758"} Dec 09 14:12:49 crc kubenswrapper[5173]: I1209 14:12:49.115152 5173 scope.go:117] "RemoveContainer" containerID="3fa5d7f433b32fd022a0c84fd73dc15b76ba4376f38a44b65d1658496e9e91a6" Dec 09 14:12:49 crc kubenswrapper[5173]: I1209 14:12:49.115335 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:49 crc kubenswrapper[5173]: I1209 14:12:49.116191 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:49 crc kubenswrapper[5173]: I1209 14:12:49.116223 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:49 crc kubenswrapper[5173]: I1209 14:12:49.116234 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:49 crc kubenswrapper[5173]: E1209 14:12:49.116992 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:49 crc kubenswrapper[5173]: I1209 14:12:49.117264 5173 scope.go:117] "RemoveContainer" containerID="919dc9b36ab47a3c9ce80c1352d5821dac400391e2ac9c5da3602f0a1da66758" Dec 09 14:12:49 crc kubenswrapper[5173]: E1209 14:12:49.117508 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:12:49 crc kubenswrapper[5173]: E1209 14:12:49.119086 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f917c4dff2fea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f917c4dff2fea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:22.023065578 +0000 UTC m=+24.948347825,LastTimestamp:2025-12-09 14:12:49.11747063 +0000 UTC m=+52.042752877,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:49 crc kubenswrapper[5173]: I1209 14:12:49.631916 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:50 crc kubenswrapper[5173]: I1209 14:12:50.118968 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 09 14:12:50 crc kubenswrapper[5173]: I1209 14:12:50.627384 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:51 crc kubenswrapper[5173]: I1209 14:12:51.629902 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:52 crc kubenswrapper[5173]: I1209 14:12:52.629681 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:52 crc kubenswrapper[5173]: I1209 14:12:52.698159 5173 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:12:52 crc kubenswrapper[5173]: I1209 14:12:52.698529 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:52 crc kubenswrapper[5173]: I1209 14:12:52.699926 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:52 crc kubenswrapper[5173]: I1209 14:12:52.699977 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:52 crc kubenswrapper[5173]: I1209 14:12:52.699991 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:52 crc kubenswrapper[5173]: E1209 14:12:52.700385 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:52 crc kubenswrapper[5173]: I1209 14:12:52.700721 5173 scope.go:117] "RemoveContainer" containerID="919dc9b36ab47a3c9ce80c1352d5821dac400391e2ac9c5da3602f0a1da66758" Dec 09 14:12:52 crc kubenswrapper[5173]: E1209 14:12:52.701004 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:12:52 crc kubenswrapper[5173]: E1209 14:12:52.708526 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f917c4dff2fea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f917c4dff2fea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:22.023065578 +0000 UTC m=+24.948347825,LastTimestamp:2025-12-09 14:12:52.700963652 +0000 UTC m=+55.626245899,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:52 crc kubenswrapper[5173]: E1209 14:12:52.738341 5173 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 09 14:12:53 crc kubenswrapper[5173]: I1209 14:12:53.633645 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:53 crc kubenswrapper[5173]: I1209 14:12:53.872947 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:53 crc kubenswrapper[5173]: I1209 14:12:53.874285 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:53 crc kubenswrapper[5173]: I1209 14:12:53.874436 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:53 crc kubenswrapper[5173]: I1209 14:12:53.874470 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:53 crc kubenswrapper[5173]: I1209 14:12:53.874518 5173 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:12:53 crc kubenswrapper[5173]: E1209 14:12:53.888993 5173 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:12:54 crc kubenswrapper[5173]: I1209 14:12:54.629744 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:55 crc kubenswrapper[5173]: I1209 14:12:55.057766 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:12:55 crc kubenswrapper[5173]: I1209 14:12:55.058623 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:55 crc kubenswrapper[5173]: I1209 14:12:55.059690 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:55 crc kubenswrapper[5173]: I1209 14:12:55.059803 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:55 crc 
kubenswrapper[5173]: I1209 14:12:55.059833 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:55 crc kubenswrapper[5173]: E1209 14:12:55.060548 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:55 crc kubenswrapper[5173]: E1209 14:12:55.319302 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:12:55 crc kubenswrapper[5173]: I1209 14:12:55.633444 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:56 crc kubenswrapper[5173]: I1209 14:12:56.632215 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:57 crc kubenswrapper[5173]: I1209 14:12:57.108634 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:12:57 crc kubenswrapper[5173]: I1209 14:12:57.109078 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:12:57 crc kubenswrapper[5173]: I1209 14:12:57.110423 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:12:57 crc kubenswrapper[5173]: I1209 14:12:57.110610 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:12:57 crc kubenswrapper[5173]: I1209 14:12:57.110936 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:12:57 crc kubenswrapper[5173]: E1209 14:12:57.111509 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 09 14:12:57 crc kubenswrapper[5173]: I1209 14:12:57.111925 5173 scope.go:117] "RemoveContainer" containerID="919dc9b36ab47a3c9ce80c1352d5821dac400391e2ac9c5da3602f0a1da66758" Dec 09 14:12:57 crc kubenswrapper[5173]: E1209 14:12:57.112246 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:12:57 crc kubenswrapper[5173]: E1209 14:12:57.120467 5173 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f917c4dff2fea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f917c4dff2fea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:12:22.023065578 +0000 UTC m=+24.948347825,LastTimestamp:2025-12-09 14:12:57.112192146 +0000 UTC m=+60.037474403,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:12:57 crc kubenswrapper[5173]: I1209 14:12:57.634826 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:58 crc kubenswrapper[5173]: E1209 14:12:58.187310 5173 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:12:58 crc kubenswrapper[5173]: I1209 14:12:58.629332 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:12:59 crc kubenswrapper[5173]: I1209 14:12:59.632433 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:13:00 crc kubenswrapper[5173]: I1209 14:13:00.629189 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:13:00 crc kubenswrapper[5173]: I1209 14:13:00.889961 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:13:00 crc kubenswrapper[5173]: I1209 14:13:00.891023 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:00 crc kubenswrapper[5173]: I1209 14:13:00.891068 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:00 crc kubenswrapper[5173]: I1209 14:13:00.891081 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:00 crc kubenswrapper[5173]: I1209 14:13:00.891108 5173 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:13:00 crc kubenswrapper[5173]: E1209 14:13:00.901133 5173 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 09 14:13:01 crc kubenswrapper[5173]: I1209 14:13:01.629526 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" 
at the cluster scope Dec 09 14:13:02 crc kubenswrapper[5173]: E1209 14:13:02.325626 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 09 14:13:02 crc kubenswrapper[5173]: I1209 14:13:02.629273 5173 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 09 14:13:02 crc kubenswrapper[5173]: I1209 14:13:02.768955 5173 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-lf7x8" Dec 09 14:13:02 crc kubenswrapper[5173]: I1209 14:13:02.779002 5173 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-lf7x8" Dec 09 14:13:02 crc kubenswrapper[5173]: I1209 14:13:02.874330 5173 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 09 14:13:03 crc kubenswrapper[5173]: I1209 14:13:03.468668 5173 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 09 14:13:03 crc kubenswrapper[5173]: I1209 14:13:03.780775 5173 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-08 14:08:02 +0000 UTC" deadline="2026-01-05 13:39:27.587536355 +0000 UTC" Dec 09 14:13:03 crc kubenswrapper[5173]: I1209 14:13:03.780832 5173 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="647h26m23.806708786s" Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.901401 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.902363 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.902416 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.902429 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.902513 5173 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.916832 5173 kubelet_node_status.go:127] "Node was previously registered" node="crc" Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.917091 5173 kubelet_node_status.go:81] "Successfully registered node" node="crc" Dec 09 14:13:07 crc kubenswrapper[5173]: E1209 14:13:07.917109 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.919998 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.920052 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:07 crc kubenswrapper[5173]: 
I1209 14:13:07.920062 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.920077 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.920089 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:07Z","lastTransitionTime":"2025-12-09T14:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:07 crc kubenswrapper[5173]: E1209 14:13:07.933861 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8a1fb4-b79b-40c8-87ab-701c2aec36f3\\\",\\\"systemUUID\\\":\\\"b723954a-7a7f-4e69-bb6f-4921ffb1c94e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.941203 5173 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.941457 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.941567 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.941671 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.941759 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:07Z","lastTransitionTime":"2025-12-09T14:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:07 crc kubenswrapper[5173]: E1209 14:13:07.952224 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8a1fb4-b79b-40c8-87ab-701c2aec36f3\\\",\\\"systemUUID\\\":\\\"b723954a-7a7f-4e69-bb6f-4921ffb1c94e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.959587 5173 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.959631 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.959641 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.959657 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.959667 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:07Z","lastTransitionTime":"2025-12-09T14:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:07 crc kubenswrapper[5173]: E1209 14:13:07.970038 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8a1fb4-b79b-40c8-87ab-701c2aec36f3\\\",\\\"systemUUID\\\":\\\"b723954a-7a7f-4e69-bb6f-4921ffb1c94e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.977827 5173 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.977873 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.977886 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.977902 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:07 crc kubenswrapper[5173]: I1209 14:13:07.977913 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:07Z","lastTransitionTime":"2025-12-09T14:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:07 crc kubenswrapper[5173]: E1209 14:13:07.994345 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8a1fb4-b79b-40c8-87ab-701c2aec36f3\\\",\\\"systemUUID\\\":\\\"b723954a-7a7f-4e69-bb6f-4921ffb1c94e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:07 crc kubenswrapper[5173]: E1209 14:13:07.994545 5173 
kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Dec 09 14:13:07 crc kubenswrapper[5173]: E1209 14:13:07.994582 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:08 crc kubenswrapper[5173]: E1209 14:13:08.095246 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:08 crc kubenswrapper[5173]: E1209 14:13:08.188438 5173 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 09 14:13:08 crc kubenswrapper[5173]: E1209 14:13:08.195386 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:08 crc kubenswrapper[5173]: E1209 14:13:08.295492 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:08 crc kubenswrapper[5173]: E1209 14:13:08.396041 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:08 crc kubenswrapper[5173]: E1209 14:13:08.496434 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:08 crc kubenswrapper[5173]: E1209 14:13:08.597499 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:08 crc kubenswrapper[5173]: E1209 14:13:08.698189 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:08 crc kubenswrapper[5173]: E1209 14:13:08.798690 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:08 crc kubenswrapper[5173]: E1209 14:13:08.899840 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:09 crc kubenswrapper[5173]: E1209 14:13:09.000954 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:09 crc kubenswrapper[5173]: E1209 14:13:09.101662 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:09 crc kubenswrapper[5173]: E1209 14:13:09.201889 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:09 crc kubenswrapper[5173]: E1209 14:13:09.303129 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:09 crc kubenswrapper[5173]: E1209 14:13:09.403943 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:09 crc kubenswrapper[5173]: E1209 14:13:09.505226 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:09 crc kubenswrapper[5173]: E1209 14:13:09.606541 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:09 crc kubenswrapper[5173]: E1209 14:13:09.707555 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:09 crc kubenswrapper[5173]: E1209 14:13:09.808621 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:09 crc kubenswrapper[5173]: I1209 14:13:09.870142 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:13:09 crc kubenswrapper[5173]: I1209 14:13:09.870862 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:09 crc kubenswrapper[5173]: I1209 14:13:09.870894 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:09 crc kubenswrapper[5173]: I1209 14:13:09.870905 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:09 crc kubenswrapper[5173]: E1209 14:13:09.871369 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:13:09 crc kubenswrapper[5173]: I1209 14:13:09.871579 5173 scope.go:117] "RemoveContainer" containerID="919dc9b36ab47a3c9ce80c1352d5821dac400391e2ac9c5da3602f0a1da66758"
Dec 09 14:13:09 crc kubenswrapper[5173]: E1209 14:13:09.909443 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:10 crc kubenswrapper[5173]: E1209 14:13:10.009595 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:10 crc kubenswrapper[5173]: E1209 14:13:10.110263 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:10 crc kubenswrapper[5173]: I1209 14:13:10.188097 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 09 14:13:10 crc kubenswrapper[5173]: I1209 14:13:10.190212 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d"}
Dec 09 14:13:10 crc kubenswrapper[5173]: I1209 14:13:10.190497 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:13:10 crc kubenswrapper[5173]: I1209 14:13:10.191251 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:10 crc kubenswrapper[5173]: I1209 14:13:10.191301 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:10 crc kubenswrapper[5173]: I1209 14:13:10.191317 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:10 crc kubenswrapper[5173]: E1209 14:13:10.191875 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:13:10 crc kubenswrapper[5173]: E1209 14:13:10.211200 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:10 crc kubenswrapper[5173]: E1209 14:13:10.311483 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:10 crc kubenswrapper[5173]: E1209 14:13:10.411962 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:10 crc kubenswrapper[5173]: E1209 14:13:10.512127 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:10 crc kubenswrapper[5173]: E1209 14:13:10.612242 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:10 crc kubenswrapper[5173]: E1209 14:13:10.712773 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:10 crc kubenswrapper[5173]: E1209 14:13:10.813625 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:10 crc kubenswrapper[5173]: E1209 14:13:10.914595 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:11 crc kubenswrapper[5173]: E1209 14:13:11.014786 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:11 crc kubenswrapper[5173]: E1209 14:13:11.115719 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:11 crc kubenswrapper[5173]: E1209 14:13:11.216703 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:11 crc kubenswrapper[5173]: E1209 14:13:11.317389 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:11 crc kubenswrapper[5173]: E1209 14:13:11.418793 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:11 crc kubenswrapper[5173]: E1209 14:13:11.519212 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:11 crc kubenswrapper[5173]: E1209 14:13:11.619474 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:11 crc kubenswrapper[5173]: E1209 14:13:11.720379 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:11 crc kubenswrapper[5173]: E1209 14:13:11.821119 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:11 crc kubenswrapper[5173]: E1209 14:13:11.922120 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:12 crc kubenswrapper[5173]: E1209 14:13:12.022452 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:12 crc kubenswrapper[5173]: E1209 14:13:12.122837 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:12 crc kubenswrapper[5173]: I1209 14:13:12.196009 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Dec 09 14:13:12 crc kubenswrapper[5173]: I1209 14:13:12.196720 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 09 14:13:12 crc kubenswrapper[5173]: I1209 14:13:12.198077 5173 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d" exitCode=255
Dec 09 14:13:12 crc kubenswrapper[5173]: I1209 14:13:12.198158 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d"}
Dec 09 14:13:12 crc kubenswrapper[5173]: I1209 14:13:12.198237 5173 scope.go:117] "RemoveContainer" containerID="919dc9b36ab47a3c9ce80c1352d5821dac400391e2ac9c5da3602f0a1da66758"
Dec 09 14:13:12 crc kubenswrapper[5173]: I1209 14:13:12.198558 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:13:12 crc kubenswrapper[5173]: I1209 14:13:12.199554 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:12 crc kubenswrapper[5173]: I1209 14:13:12.199588 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:12 crc kubenswrapper[5173]: I1209 14:13:12.199601 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:12 crc kubenswrapper[5173]: E1209 14:13:12.199998 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:13:12 crc kubenswrapper[5173]: I1209 14:13:12.200269 5173 scope.go:117] "RemoveContainer" containerID="c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d"
Dec 09 14:13:12 crc kubenswrapper[5173]: E1209 14:13:12.200510 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 09 14:13:12 crc kubenswrapper[5173]: E1209 14:13:12.223769 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:12 crc kubenswrapper[5173]: E1209 14:13:12.324831 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:12 crc kubenswrapper[5173]: E1209 14:13:12.425505 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:12 crc kubenswrapper[5173]: E1209 14:13:12.526577 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:12 crc kubenswrapper[5173]: E1209 14:13:12.627423 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:12 crc kubenswrapper[5173]: I1209 14:13:12.698551 5173 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:13:12 crc kubenswrapper[5173]: E1209 14:13:12.728347 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:12 crc kubenswrapper[5173]: E1209 14:13:12.829084 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:12 crc kubenswrapper[5173]: I1209 14:13:12.869861 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:13:12 crc kubenswrapper[5173]: I1209 14:13:12.870737 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:12 crc kubenswrapper[5173]: I1209 14:13:12.870905 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:12 crc kubenswrapper[5173]: I1209 14:13:12.871015 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:12 crc kubenswrapper[5173]: E1209 14:13:12.871456 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:13:12 crc kubenswrapper[5173]: E1209 14:13:12.929984 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:13 crc kubenswrapper[5173]: E1209 14:13:13.030427 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:13 crc kubenswrapper[5173]: E1209 14:13:13.131410 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:13 crc kubenswrapper[5173]: I1209 14:13:13.202016 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Dec 09 14:13:13 crc kubenswrapper[5173]: I1209 14:13:13.203894 5173 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 09 14:13:13 crc kubenswrapper[5173]: I1209 14:13:13.204662 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:13 crc kubenswrapper[5173]: I1209 14:13:13.204779 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:13 crc kubenswrapper[5173]: I1209 14:13:13.204875 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:13 crc kubenswrapper[5173]: E1209 14:13:13.205390 5173 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 09 14:13:13 crc kubenswrapper[5173]: I1209 14:13:13.205729 5173 scope.go:117] "RemoveContainer" containerID="c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d"
Dec 09 14:13:13 crc kubenswrapper[5173]: E1209 14:13:13.206034 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 09 14:13:13 crc kubenswrapper[5173]: E1209 14:13:13.232146 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:13 crc kubenswrapper[5173]: E1209 14:13:13.332676 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:13 crc kubenswrapper[5173]: E1209 14:13:13.433522 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:13 crc kubenswrapper[5173]: E1209 14:13:13.534325 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:13 crc kubenswrapper[5173]: E1209 14:13:13.635133 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:13 crc kubenswrapper[5173]: E1209 14:13:13.736259 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:13 crc kubenswrapper[5173]: E1209 14:13:13.837471 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:13 crc kubenswrapper[5173]: E1209 14:13:13.938573 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:14 crc kubenswrapper[5173]: E1209 14:13:14.039213 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:14 crc kubenswrapper[5173]: E1209 14:13:14.140686 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:14 crc kubenswrapper[5173]: E1209 14:13:14.241741 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:14 crc kubenswrapper[5173]: E1209 14:13:14.342800 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:14 crc kubenswrapper[5173]: E1209 14:13:14.443919 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:14 crc kubenswrapper[5173]: E1209 14:13:14.544677 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:14 crc kubenswrapper[5173]: E1209 14:13:14.645208 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:14 crc kubenswrapper[5173]: E1209 14:13:14.745795 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:14 crc kubenswrapper[5173]: E1209 14:13:14.846922 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:14 crc kubenswrapper[5173]: E1209 14:13:14.947417 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:15 crc kubenswrapper[5173]: E1209 14:13:15.048050 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:15 crc kubenswrapper[5173]: E1209 14:13:15.149179 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:15 crc kubenswrapper[5173]: E1209 14:13:15.249405 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:15 crc kubenswrapper[5173]: E1209 14:13:15.350254 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:15 crc kubenswrapper[5173]: E1209 14:13:15.451245 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 09 14:13:15 crc kubenswrapper[5173]: E1209 
14:13:15.551454 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:15 crc kubenswrapper[5173]: E1209 14:13:15.652032 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:15 crc kubenswrapper[5173]: E1209 14:13:15.752444 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:15 crc kubenswrapper[5173]: E1209 14:13:15.853621 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:15 crc kubenswrapper[5173]: E1209 14:13:15.954127 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:16 crc kubenswrapper[5173]: E1209 14:13:16.054329 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:16 crc kubenswrapper[5173]: E1209 14:13:16.154922 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:16 crc kubenswrapper[5173]: E1209 14:13:16.255649 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:16 crc kubenswrapper[5173]: E1209 14:13:16.356511 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:16 crc kubenswrapper[5173]: E1209 14:13:16.456782 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:16 crc kubenswrapper[5173]: E1209 14:13:16.557256 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:16 crc kubenswrapper[5173]: I1209 14:13:16.625711 5173 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 14:13:16 crc kubenswrapper[5173]: E1209 14:13:16.657841 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:16 crc kubenswrapper[5173]: E1209 14:13:16.758294 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:16 crc kubenswrapper[5173]: E1209 14:13:16.859037 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:16 crc kubenswrapper[5173]: E1209 14:13:16.960100 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:17 crc kubenswrapper[5173]: E1209 14:13:17.060426 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:17 crc kubenswrapper[5173]: E1209 14:13:17.161155 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:17 crc kubenswrapper[5173]: E1209 14:13:17.262281 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:17 crc kubenswrapper[5173]: E1209 14:13:17.363442 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:17 crc kubenswrapper[5173]: E1209 14:13:17.464114 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:17 crc 
kubenswrapper[5173]: E1209 14:13:17.564656 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:17 crc kubenswrapper[5173]: E1209 14:13:17.665466 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:17 crc kubenswrapper[5173]: E1209 14:13:17.766425 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:17 crc kubenswrapper[5173]: E1209 14:13:17.866957 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:17 crc kubenswrapper[5173]: E1209 14:13:17.967100 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.067534 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.168901 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.188987 5173 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.269485 5173 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.296915 5173 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.353198 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.353268 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.353284 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.353312 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.353326 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:18Z","lastTransitionTime":"2025-12-09T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.359604 5173 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.365076 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8a1fb4-b79b-40c8-87ab-701c2aec36f3\\\",\\\"systemUUID\\\":\\\"b723954a-7a7f-4e69-bb6f-4921ffb1c94e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.369076 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.369119 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.369131 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.369148 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.369162 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:18Z","lastTransitionTime":"2025-12-09T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.378547 5173 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.380591 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8a1fb4-b79b-40c8-87ab-701c2aec36f3\\\",\\\"systemUUID\\\":\\\"b723954a-7a7f-4e69-bb6f-4921ffb1c94e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.384178 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.384232 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.384253 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.384275 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.384291 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:18Z","lastTransitionTime":"2025-12-09T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.394736 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8a1fb4-b79b-40c8-87ab-701c2aec36f3\\\",\\\"systemUUID\\\":\\\"b723954a-7a7f-4e69-bb6f-4921ffb1c94e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.397890 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.397942 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.397960 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.397982 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.397998 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:18Z","lastTransitionTime":"2025-12-09T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.411268 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8a1fb4-b79b-40c8-87ab-701c2aec36f3\\\",\\\"systemUUID\\\":\\\"b723954a-7a7f-4e69-bb6f-4921ffb1c94e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.414258 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.414285 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.414296 5173 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.414312 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.414327 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:18Z","lastTransitionTime":"2025-12-09T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.422172 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8a1fb4-b79b-40c8-87ab-701c2aec36f3\\\",\\\"systemUUID\\\":\\\"b723954a-7a7f-4e69-bb6f-4921ffb1c94e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.422454 5173 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.423580 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.423680 5173 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.423695 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.423713 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.423726 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:18Z","lastTransitionTime":"2025-12-09T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.477129 5173 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.526012 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.526061 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.526074 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.526091 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.526103 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:18Z","lastTransitionTime":"2025-12-09T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.579225 5173 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.628068 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.628136 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.628162 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.628183 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.628199 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:18Z","lastTransitionTime":"2025-12-09T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.662639 5173 apiserver.go:52] "Watching apiserver" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.670070 5173 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.670649 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mw8tp","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-target-fhkjl","openshift-ovn-kubernetes/ovnkube-node-4hj6p","openshift-etcd/etcd-crc","openshift-multus/multus-d24z7","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf","openshift-image-registry/node-ca-trx55","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-multus/network-metrics-daemon-lbnx5","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-node-identity/network-node-identity-dgvkt","openshift-network-operator/iptables-alerter-5jnd7","openshift-dns/node-resolver-94z8j","openshift-kube-apiserver/kube-apiserver-crc","openshift-machine-config-operator/machine-config-daemon-pxfmg"] Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.671697 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.672798 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.673089 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.673254 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.673501 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.673338 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.674384 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.674588 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.674394 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.674843 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.675035 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.675585 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.675785 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.675807 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.676015 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.676267 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.689636 5173 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.690052 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.702050 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.715085 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.724743 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.730167 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.730226 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.730239 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.730257 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.730270 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:18Z","lastTransitionTime":"2025-12-09T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.735794 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.742916 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.743125 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.744823 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.746216 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.746429 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.746499 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.746528 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.746574 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.746779 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 09 
14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.747798 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.748817 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.760344 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49bec440-391d-48d9-9bc6-a14f40787067\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4hj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.769211 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.769617 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.771405 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.771735 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.772387 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.772612 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.772736 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.779189 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.783774 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a8dd347-8a1b-4551-a318-abe7c12df817-proxy-tls\") pod \"machine-config-daemon-pxfmg\" (UID: \"8a8dd347-8a1b-4551-a318-abe7c12df817\") " pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.783821 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-run-netns\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.783854 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: 
\"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.783879 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784222 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784271 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-slash\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784290 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-var-lib-openvswitch\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784317 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-kubelet\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784338 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/49bec440-391d-48d9-9bc6-a14f40787067-ovn-node-metrics-cert\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784392 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-cni-netd\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784413 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/49bec440-391d-48d9-9bc6-a14f40787067-ovnkube-config\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784430 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784454 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784478 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-node-log\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784512 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784533 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-run-ovn-kubernetes\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784550 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784573 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784601 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784623 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-run-systemd\") pod \"ovnkube-node-4hj6p\" (UID: 
\"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784640 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/49bec440-391d-48d9-9bc6-a14f40787067-env-overrides\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784660 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-run-openvswitch\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784681 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784701 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tzp5\" (UniqueName: \"kubernetes.io/projected/8a8dd347-8a1b-4551-a318-abe7c12df817-kube-api-access-6tzp5\") pod \"machine-config-daemon-pxfmg\" (UID: \"8a8dd347-8a1b-4551-a318-abe7c12df817\") " pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784721 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784739 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-log-socket\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784760 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784775 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-run-ovn\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784794 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/8a8dd347-8a1b-4551-a318-abe7c12df817-mcd-auth-proxy-config\") pod \"machine-config-daemon-pxfmg\" (UID: \"8a8dd347-8a1b-4551-a318-abe7c12df817\") " pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784814 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784834 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784853 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8a8dd347-8a1b-4551-a318-abe7c12df817-rootfs\") pod \"machine-config-daemon-pxfmg\" (UID: \"8a8dd347-8a1b-4551-a318-abe7c12df817\") " pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784870 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p5kj\" (UniqueName: \"kubernetes.io/projected/49bec440-391d-48d9-9bc6-a14f40787067-kube-api-access-5p5kj\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.784890 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.784971 5173 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.785040 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:19.285022216 +0000 UTC m=+82.210304463 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.785086 5173 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.785032 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-systemd-units\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.785183 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:19.285166891 +0000 UTC m=+82.210449128 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.785208 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-etc-openvswitch\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.785230 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-cni-bin\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.785435 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/49bec440-391d-48d9-9bc6-a14f40787067-ovnkube-script-lib\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.787296 5173 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.787770 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.788131 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.788533 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.793399 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.798941 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-d24z7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.801390 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.801824 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.801855 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.801880 5173 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.801956 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:19.301938103 +0000 UTC m=+82.227220360 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.802277 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.802522 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.802642 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.802659 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.802529 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.802796 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.803255 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.804878 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-94z8j" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.805970 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.806278 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.806515 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.808453 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.808899 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.809073 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.809641 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.809662 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.809672 5173 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.809727 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:19.309712935 +0000 UTC m=+82.234995182 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.810239 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.810680 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.810766 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.812700 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.816171 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.823934 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.828979 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.829094 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lbnx5" podUID="5d73c2ad-08e4-439f-8c5f-adb67b27ef4b" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.832184 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.832214 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.832412 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.832450 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.832468 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:18Z","lastTransitionTime":"2025-12-09T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.833401 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.837284 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.837315 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.837440 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-trx55" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.838761 5173 scope.go:117] "RemoveContainer" containerID="c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d" Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.838966 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.839816 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.841203 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.841497 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.841671 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.841836 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.841973 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.843713 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.843680 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-d24z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80ae74e-7470-4168-bdc1-454fa2137d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7glnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d24z7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.850004 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-94z8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh9pw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94z8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.859837 5173 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.860322 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.869083 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.879158 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.886768 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.886824 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.886856 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.886890 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.886915 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.886936 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.886957 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.886982 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" 
(UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.887004 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.887022 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.887045 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.887066 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.887091 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.887111 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.887198 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.887273 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.887297 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.887345 5173 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.887484 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.887510 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.887535 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.887731 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.887963 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.888032 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.888124 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.888376 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.888384 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.888437 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.888551 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.888679 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.888729 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.888758 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.888750 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.888783 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.888922 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889017 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889053 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889078 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889089 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889099 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889138 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889169 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889194 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889217 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889222 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889243 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889268 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889334 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889333 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889433 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889445 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889523 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889563 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889540 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889588 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889604 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889620 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889632 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889644 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889638 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.889858 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.890513 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.890619 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.890631 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.890547 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.890673 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.890694 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.890712 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.890730 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.890747 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.890753 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.890765 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.890776 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.890902 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.890921 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.890938 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.890946 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.890015 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.890955 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891010 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891034 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891056 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891071 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891088 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891104 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891124 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891141 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891157 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod 
\"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891171 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891187 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891204 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891223 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891255 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891270 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891282 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891314 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891338 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891379 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891399 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891415 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891430 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891446 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891463 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891478 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891498 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891516 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891533 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891547 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891563 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891582 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891602 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891618 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.892486 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.893119 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.893155 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.893177 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.893196 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.893213 5173 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.893390 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.893412 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.893667 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.894595 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.894630 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.894706 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.894734 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.894756 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.894779 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 
14:13:18.894802 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.894823 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891395 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.894854 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891565 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.894874 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891835 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.891991 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.894898 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.894917 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.894939 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.894957 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.894977 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.894997 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895017 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895035 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895054 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895072 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895094 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895116 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895137 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895160 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895181 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895201 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895225 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895245 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895266 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895286 5173 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895304 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895325 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895345 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895386 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895405 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895425 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895451 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895471 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895491 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 
14:13:18.895511 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895533 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895552 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895575 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895593 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895651 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895674 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895695 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895715 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895734 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 09 
14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895756 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895776 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895795 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895814 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895836 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895857 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895895 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895916 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895939 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895962 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 09 
14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895984 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896004 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896027 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896048 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896071 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896091 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896111 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896133 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896154 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896175 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:13:18 crc 
kubenswrapper[5173]: I1209 14:13:18.896198 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896221 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896241 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896261 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896282 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896304 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896326 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896360 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896381 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896404 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: 
\"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896450 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896471 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896494 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896516 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896538 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896559 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896582 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896604 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896626 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896649 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896670 
5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896705 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896779 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896802 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896822 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896843 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896865 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896889 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896913 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896935 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 
14:13:18.896957 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896990 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897013 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897034 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897057 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897081 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897105 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897128 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897152 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897174 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897197 5173 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897226 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897250 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897283 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897635 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897667 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897689 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897708 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897730 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897751 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 
14:13:18.897771 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897790 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897810 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897829 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897849 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897870 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897889 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897909 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897929 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897950 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897969 5173 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897990 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898013 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898034 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898055 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898076 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898167 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9716f570-4790-4075-a3c3-42114eb7728e-serviceca\") pod \"node-ca-trx55\" (UID: \"9716f570-4790-4075-a3c3-42114eb7728e\") " pod="openshift-image-registry/node-ca-trx55" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898195 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qdhf\" (UniqueName: \"kubernetes.io/projected/9716f570-4790-4075-a3c3-42114eb7728e-kube-api-access-2qdhf\") pod \"node-ca-trx55\" (UID: \"9716f570-4790-4075-a3c3-42114eb7728e\") " pod="openshift-image-registry/node-ca-trx55" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898221 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a80ae74e-7470-4168-bdc1-454fa2137d7a-cni-binary-copy\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898252 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898294 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-node-log\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898314 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e370197d-9d3c-48ce-8973-ceed80782226-os-release\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898331 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-multus-cni-dir\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898386 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-run-ovn-kubernetes\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898408 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-host-run-netns\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898465 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-hostroot\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898483 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/07ddf926-e4f7-4486-920c-8d83fca5b4da-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-srjbf\" (UID: \"07ddf926-e4f7-4486-920c-8d83fca5b4da\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898526 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-os-release\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898554 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-host-var-lib-cni-bin\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898772 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e370197d-9d3c-48ce-8973-ceed80782226-system-cni-dir\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898806 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e370197d-9d3c-48ce-8973-ceed80782226-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898837 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-etc-kubernetes\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898921 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-run-systemd\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898951 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/49bec440-391d-48d9-9bc6-a14f40787067-env-overrides\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.898994 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-run-openvswitch\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899022 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-system-cni-dir\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899049 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-cnibin\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899078 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-multus-socket-dir-parent\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899167 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6tzp5\" (UniqueName: \"kubernetes.io/projected/8a8dd347-8a1b-4551-a318-abe7c12df817-kube-api-access-6tzp5\") pod \"machine-config-daemon-pxfmg\" (UID: \"8a8dd347-8a1b-4551-a318-abe7c12df817\") " pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899200 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e370197d-9d3c-48ce-8973-ceed80782226-cni-binary-copy\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899239 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-log-socket\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899266 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-host-var-lib-kubelet\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899296 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/07ddf926-e4f7-4486-920c-8d83fca5b4da-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-srjbf\" (UID: \"07ddf926-e4f7-4486-920c-8d83fca5b4da\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899372 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-run-ovn\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899407 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a8dd347-8a1b-4551-a318-abe7c12df817-mcd-auth-proxy-config\") pod \"machine-config-daemon-pxfmg\" (UID: \"8a8dd347-8a1b-4551-a318-abe7c12df817\") " pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899433 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e370197d-9d3c-48ce-8973-ceed80782226-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 
14:13:18.899475 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899503 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8a8dd347-8a1b-4551-a318-abe7c12df817-rootfs\") pod \"machine-config-daemon-pxfmg\" (UID: \"8a8dd347-8a1b-4551-a318-abe7c12df817\") " pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899528 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5p5kj\" (UniqueName: \"kubernetes.io/projected/49bec440-391d-48d9-9bc6-a14f40787067-kube-api-access-5p5kj\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899546 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-host-var-lib-cni-multus\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899563 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a80ae74e-7470-4168-bdc1-454fa2137d7a-multus-daemon-config\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899595 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-systemd-units\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899615 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-etc-openvswitch\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899634 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-cni-bin\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899654 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/49bec440-391d-48d9-9bc6-a14f40787067-ovnkube-script-lib\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899728 5173 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a8dd347-8a1b-4551-a318-abe7c12df817-proxy-tls\") pod \"machine-config-daemon-pxfmg\" (UID: \"8a8dd347-8a1b-4551-a318-abe7c12df817\") " pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899752 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7glnp\" (UniqueName: \"kubernetes.io/projected/a80ae74e-7470-4168-bdc1-454fa2137d7a-kube-api-access-7glnp\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899773 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37-hosts-file\") pod \"node-resolver-94z8j\" (UID: \"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37\") " pod="openshift-dns/node-resolver-94z8j" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899794 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-run-netns\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899818 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e370197d-9d3c-48ce-8973-ceed80782226-cnibin\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899836 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-host-run-k8s-cni-cncf-io\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899856 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37-tmp-dir\") pod \"node-resolver-94z8j\" (UID: \"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37\") " pod="openshift-dns/node-resolver-94z8j" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899875 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/07ddf926-e4f7-4486-920c-8d83fca5b4da-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-srjbf\" (UID: \"07ddf926-e4f7-4486-920c-8d83fca5b4da\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.899897 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " 
pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.900091 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdfcm\" (UniqueName: \"kubernetes.io/projected/07ddf926-e4f7-4486-920c-8d83fca5b4da-kube-api-access-mdfcm\") pod \"ovnkube-control-plane-57b78d8988-srjbf\" (UID: \"07ddf926-e4f7-4486-920c-8d83fca5b4da\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.900118 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-slash\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.900139 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-var-lib-openvswitch\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.900161 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5t48\" (UniqueName: \"kubernetes.io/projected/e370197d-9d3c-48ce-8973-ceed80782226-kube-api-access-q5t48\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.900179 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-multus-conf-dir\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.900217 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-host-run-multus-certs\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901011 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh9pw\" (UniqueName: \"kubernetes.io/projected/a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37-kube-api-access-vh9pw\") pod \"node-resolver-94z8j\" (UID: \"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37\") " pod="openshift-dns/node-resolver-94z8j" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901043 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s95xm\" (UniqueName: \"kubernetes.io/projected/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-kube-api-access-s95xm\") pod \"network-metrics-daemon-lbnx5\" (UID: \"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\") " pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901067 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-kubelet\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901088 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/49bec440-391d-48d9-9bc6-a14f40787067-ovn-node-metrics-cert\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901109 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/e370197d-9d3c-48ce-8973-ceed80782226-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901128 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs\") pod \"network-metrics-daemon-lbnx5\" (UID: \"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\") " pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901154 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-cni-netd\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901176 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/49bec440-391d-48d9-9bc6-a14f40787067-ovnkube-config\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901194 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9716f570-4790-4075-a3c3-42114eb7728e-host\") pod \"node-ca-trx55\" (UID: \"9716f570-4790-4075-a3c3-42114eb7728e\") " pod="openshift-image-registry/node-ca-trx55" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901299 5173 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901312 5173 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901323 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901335 5173 reconciler_common.go:299] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901365 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901381 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901397 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901411 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901425 5173 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901436 5173 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901447 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901457 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901467 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901478 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901489 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901498 5173 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901510 5173 reconciler_common.go:299] "Volume detached for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901520 5173 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901530 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901541 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901552 5173 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901562 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901574 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901584 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901595 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901605 5173 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901615 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901626 5173 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901637 5173 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901648 5173 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901657 5173 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901668 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901680 5173 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901690 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901701 5173 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.904382 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e370197d-9d3c-48ce-8973-ceed80782226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mw8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.892030 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.892052 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.892099 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.892480 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.892503 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.892594 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.892737 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.892766 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.892818 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.892929 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.893087 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.893099 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.893114 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.893591 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.893826 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.893929 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.894173 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.894189 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.894262 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895064 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895132 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895227 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895442 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.906378 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895662 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895901 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.895916 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896089 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896172 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896175 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896416 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). 
InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896477 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896536 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896550 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896596 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896671 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.896961 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897101 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.897259 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.901280 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.902692 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.902925 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.902967 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.903409 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.903884 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.904121 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.904199 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.904456 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.904475 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.904554 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.904789 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.905085 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.905405 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.905617 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). 
InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.905629 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.905721 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.905818 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: E1209 14:13:18.906042 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:19.406022411 +0000 UTC m=+82.331304658 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.906768 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-node-log\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.906827 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-log-socket\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.906843 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.906864 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.906945 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.907280 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.907703 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.907898 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.908035 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.906607 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.906676 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.908034 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.906313 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.906092 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-kubelet\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.908396 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.908594 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-run-ovn-kubernetes\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.908679 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-etc-openvswitch\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.908713 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-cni-bin\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.908710 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.908644 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-systemd-units\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.908936 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.909028 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-run-openvswitch\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.909318 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.909478 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-run-netns\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.909523 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-cni-netd\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.909545 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-run-ovn\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.909558 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.909592 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.909769 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-slash\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.909798 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.909889 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.909955 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/49bec440-391d-48d9-9bc6-a14f40787067-env-overrides\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.910169 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-run-systemd\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.909964 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8a8dd347-8a1b-4551-a318-abe7c12df817-rootfs\") pod \"machine-config-daemon-pxfmg\" (UID: \"8a8dd347-8a1b-4551-a318-abe7c12df817\") " pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.910389 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-var-lib-openvswitch\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.911976 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a8dd347-8a1b-4551-a318-abe7c12df817-mcd-auth-proxy-config\") pod \"machine-config-daemon-pxfmg\" (UID: \"8a8dd347-8a1b-4551-a318-abe7c12df817\") " pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.912214 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/49bec440-391d-48d9-9bc6-a14f40787067-ovnkube-config\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 
crc kubenswrapper[5173]: I1209 14:13:18.915274 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.915946 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/49bec440-391d-48d9-9bc6-a14f40787067-ovn-node-metrics-cert\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.916030 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/49bec440-391d-48d9-9bc6-a14f40787067-ovnkube-script-lib\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.916641 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.917697 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.917782 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a8dd347-8a1b-4551-a318-abe7c12df817-proxy-tls\") pod \"machine-config-daemon-pxfmg\" (UID: \"8a8dd347-8a1b-4551-a318-abe7c12df817\") " pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.917941 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.917998 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.920508 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.920696 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.920778 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.921250 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.921542 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.922076 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.922098 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.922109 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.922813 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.922957 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.922988 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.922944 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.923189 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.923200 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.923215 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.923256 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.923379 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.923544 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). 
InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.923590 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.923617 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.923638 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.923673 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.923689 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.924025 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.924045 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.924161 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.924292 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.924319 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.924367 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.924468 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.924526 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.924671 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.924743 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.924845 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.924851 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.925305 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tzp5\" (UniqueName: \"kubernetes.io/projected/8a8dd347-8a1b-4551-a318-abe7c12df817-kube-api-access-6tzp5\") pod \"machine-config-daemon-pxfmg\" (UID: \"8a8dd347-8a1b-4551-a318-abe7c12df817\") " pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.925633 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.925711 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.926545 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.926811 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.927008 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.927001 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.927683 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.927916 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.928689 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.930575 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.930896 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.931147 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49bec440-391d-48d9-9bc6-a14f40787067\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b06
85b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"lo
g-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4hj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.932613 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.932925 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.933824 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.934368 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.934769 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.934899 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.935002 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.935475 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.935516 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.935290 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.935455 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.935462 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.935497 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.936548 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.936569 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.937142 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p5kj\" (UniqueName: \"kubernetes.io/projected/49bec440-391d-48d9-9bc6-a14f40787067-kube-api-access-5p5kj\") pod \"ovnkube-node-4hj6p\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") " pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.937214 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.937277 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.937453 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.937498 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.938318 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.939130 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.939189 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.939424 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.939972 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.939985 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a8dd347-8a1b-4551-a318-abe7c12df817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tzp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tzp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pxfmg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.940026 5173 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.940021 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.940897 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.940913 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.940947 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.941414 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.942901 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.942917 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.943530 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.943897 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.944076 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.944121 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.944013 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.944308 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.944543 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.945087 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.945152 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.945214 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.945290 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.945723 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.947671 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.947700 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.947710 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.947724 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.947733 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:18Z","lastTransitionTime":"2025-12-09T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.953785 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.953959 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.954175 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.954302 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.954300 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.954811 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.955049 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.955162 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.955171 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.955230 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.955278 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.955337 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.955681 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.955921 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49bec440-391d-48d9-9bc6-a14f40787067\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b06
85b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"lo
g-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4hj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.957211 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.960317 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.965503 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a8dd347-8a1b-4551-a318-abe7c12df817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tzp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tzp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pxfmg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.970649 5173 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.982119 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9bf6317-206d-45f3-b5c6-d074a93429f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://07cb68ad1d7939b032d461e4405874dbea3c0c580d711c636b9c1bc98534ddad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://40690c3e060def2a504e5e96407e7e684a5d65be6a03e3c0c2964c5613ac3a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1501e862b689b4aabc3ad6a8aa5f8021ccdf06efb17e8f190b8a58d3a57b778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://78cdb950caf4d3cbe020e51b49b41823961f04a520144ddc0f055b1ac4015773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://54680e71891f8c4b8d3378c6a2cebfadccf93498ccbb0cf6da1b23063f9256eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ed11940cfc0c03b0cd7b18b1d7cbe1683725e871a03c6c43986b37be8a6ac784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed11940cfc0c03b0cd7b18b1d7cbe1683725e871a03c6c43986b37be8a6ac784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://eacd477eb0e3af4fc175c9fa0420e700ae385a111ecbd41c975c2e3687639d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eacd477eb0e3af4fc175c9fa0420e700ae385a111ecbd41c975c2e3687639d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f9d4c76e5aead2cf533b5799e9d8b585203b915594390a713b19c361c77dab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9d4c76e5aead2cf533b5799e9d8b585203b915594390a713b19c361c77dab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:12:03Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:12:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.983151 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.984131 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.989846 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.989876 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:13:18 crc kubenswrapper[5173]: I1209 14:13:18.990675 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.000492 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.001873 5173 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:13:19 crc kubenswrapper[5173]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 09 14:13:19 crc kubenswrapper[5173]: set -o allexport Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: source /etc/kubernetes/apiserver-url.env Dec 09 14:13:19 crc kubenswrapper[5173]: else Dec 09 14:13:19 crc kubenswrapper[5173]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 09 14:13:19 crc kubenswrapper[5173]: exit 1 Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 09 14:13:19 crc kubenswrapper[5173]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:13:19 crc kubenswrapper[5173]: > logger="UnhandledError" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002198 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q5t48\" (UniqueName: \"kubernetes.io/projected/e370197d-9d3c-48ce-8973-ceed80782226-kube-api-access-q5t48\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002239 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-multus-conf-dir\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002261 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-host-run-multus-certs\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002286 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vh9pw\" (UniqueName: \"kubernetes.io/projected/a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37-kube-api-access-vh9pw\") pod \"node-resolver-94z8j\" (UID: \"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37\") " pod="openshift-dns/node-resolver-94z8j" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002306 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s95xm\" 
(UniqueName: \"kubernetes.io/projected/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-kube-api-access-s95xm\") pod \"network-metrics-daemon-lbnx5\" (UID: \"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\") " pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002322 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/e370197d-9d3c-48ce-8973-ceed80782226-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002338 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs\") pod \"network-metrics-daemon-lbnx5\" (UID: \"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\") " pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002399 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9716f570-4790-4075-a3c3-42114eb7728e-host\") pod \"node-ca-trx55\" (UID: \"9716f570-4790-4075-a3c3-42114eb7728e\") " pod="openshift-image-registry/node-ca-trx55" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002421 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9716f570-4790-4075-a3c3-42114eb7728e-serviceca\") pod \"node-ca-trx55\" (UID: \"9716f570-4790-4075-a3c3-42114eb7728e\") " pod="openshift-image-registry/node-ca-trx55" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002442 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2qdhf\" (UniqueName: \"kubernetes.io/projected/9716f570-4790-4075-a3c3-42114eb7728e-kube-api-access-2qdhf\") pod \"node-ca-trx55\" (UID: \"9716f570-4790-4075-a3c3-42114eb7728e\") " pod="openshift-image-registry/node-ca-trx55" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002460 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a80ae74e-7470-4168-bdc1-454fa2137d7a-cni-binary-copy\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002466 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-host-run-multus-certs\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002507 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e370197d-9d3c-48ce-8973-ceed80782226-os-release\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002574 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/e370197d-9d3c-48ce-8973-ceed80782226-os-release\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002630 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-multus-cni-dir\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.002655 5173 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002701 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-host-run-netns\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.002705 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs podName:5d73c2ad-08e4-439f-8c5f-adb67b27ef4b nodeName:}" failed. No retries permitted until 2025-12-09 14:13:19.502689221 +0000 UTC m=+82.427971468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs") pod "network-metrics-daemon-lbnx5" (UID: "5d73c2ad-08e4-439f-8c5f-adb67b27ef4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002735 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9716f570-4790-4075-a3c3-42114eb7728e-host\") pod \"node-ca-trx55\" (UID: \"9716f570-4790-4075-a3c3-42114eb7728e\") " pod="openshift-image-registry/node-ca-trx55" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002738 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-hostroot\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002811 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/07ddf926-e4f7-4486-920c-8d83fca5b4da-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-srjbf\" (UID: \"07ddf926-e4f7-4486-920c-8d83fca5b4da\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002840 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-os-release\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002890 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-host-var-lib-cni-bin\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002913 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e370197d-9d3c-48ce-8973-ceed80782226-system-cni-dir\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002958 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e370197d-9d3c-48ce-8973-ceed80782226-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002978 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-etc-kubernetes\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.003486 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/07ddf926-e4f7-4486-920c-8d83fca5b4da-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-srjbf\" (UID: \"07ddf926-e4f7-4486-920c-8d83fca5b4da\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.003543 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.003700 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/e370197d-9d3c-48ce-8973-ceed80782226-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.003734 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e370197d-9d3c-48ce-8973-ceed80782226-system-cni-dir\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.003775 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-host-var-lib-cni-bin\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.004126 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/e370197d-9d3c-48ce-8973-ceed80782226-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.004422 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-os-release\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.004542 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9716f570-4790-4075-a3c3-42114eb7728e-serviceca\") pod \"node-ca-trx55\" (UID: \"9716f570-4790-4075-a3c3-42114eb7728e\") " pod="openshift-image-registry/node-ca-trx55" Dec 09 14:13:19 crc kubenswrapper[5173]: W1209 14:13:19.004767 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-5a65da646e390ce203d750479ed0e7ab9c5a3104fa04a5933c9deb191509037e WatchSource:0}: Error finding container 5a65da646e390ce203d750479ed0e7ab9c5a3104fa04a5933c9deb191509037e: Status 404 returned error can't find the container with id 5a65da646e390ce203d750479ed0e7ab9c5a3104fa04a5933c9deb191509037e Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.002860 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-hostroot\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.004881 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-multus-conf-dir\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.004902 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-host-run-netns\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.004991 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-etc-kubernetes\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.005038 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-multus-cni-dir\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.005117 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a80ae74e-7470-4168-bdc1-454fa2137d7a-cni-binary-copy\") pod \"multus-d24z7\" (UID: 
\"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.005711 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-system-cni-dir\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.005764 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-cnibin\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.005789 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-multus-socket-dir-parent\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.005813 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e370197d-9d3c-48ce-8973-ceed80782226-cni-binary-copy\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.005870 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-host-var-lib-kubelet\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.005893 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/07ddf926-e4f7-4486-920c-8d83fca5b4da-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-srjbf\" (UID: \"07ddf926-e4f7-4486-920c-8d83fca5b4da\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.005945 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e370197d-9d3c-48ce-8973-ceed80782226-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.005980 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-host-var-lib-cni-multus\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.006028 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a80ae74e-7470-4168-bdc1-454fa2137d7a-multus-daemon-config\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " 
pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.006089 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7glnp\" (UniqueName: \"kubernetes.io/projected/a80ae74e-7470-4168-bdc1-454fa2137d7a-kube-api-access-7glnp\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.006117 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37-hosts-file\") pod \"node-resolver-94z8j\" (UID: \"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37\") " pod="openshift-dns/node-resolver-94z8j" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.006142 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e370197d-9d3c-48ce-8973-ceed80782226-cnibin\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.006192 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-host-run-k8s-cni-cncf-io\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.006227 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37-tmp-dir\") pod \"node-resolver-94z8j\" (UID: \"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37\") " pod="openshift-dns/node-resolver-94z8j" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.007566 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/07ddf926-e4f7-4486-920c-8d83fca5b4da-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-srjbf\" (UID: \"07ddf926-e4f7-4486-920c-8d83fca5b4da\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.007623 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-host-run-k8s-cni-cncf-io\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.006464 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-cnibin\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.006524 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-host-var-lib-cni-multus\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.007025 5173 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/07ddf926-e4f7-4486-920c-8d83fca5b4da-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-srjbf\" (UID: \"07ddf926-e4f7-4486-920c-8d83fca5b4da\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.007066 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-host-var-lib-kubelet\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.007244 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e370197d-9d3c-48ce-8973-ceed80782226-cnibin\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.007404 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37-hosts-file\") pod \"node-resolver-94z8j\" (UID: \"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37\") " pod="openshift-dns/node-resolver-94z8j" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.007597 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mdfcm\" (UniqueName: \"kubernetes.io/projected/07ddf926-e4f7-4486-920c-8d83fca5b4da-kube-api-access-mdfcm\") pod \"ovnkube-control-plane-57b78d8988-srjbf\" (UID: \"07ddf926-e4f7-4486-920c-8d83fca5b4da\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.007910 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e370197d-9d3c-48ce-8973-ceed80782226-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.006378 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-multus-socket-dir-parent\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.006426 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a80ae74e-7470-4168-bdc1-454fa2137d7a-system-cni-dir\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.008421 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e370197d-9d3c-48ce-8973-ceed80782226-cni-binary-copy\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009016 5173 reconciler_common.go:299] "Volume 
detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009035 5173 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009049 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009062 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009075 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009088 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009101 5173 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009113 5173 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009126 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009138 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009152 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009164 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009176 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009188 5173 reconciler_common.go:299] "Volume detached for 
volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009199 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009530 5173 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009544 5173 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009557 5173 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009562 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37-tmp-dir\") pod \"node-resolver-94z8j\" (UID: \"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37\") " pod="openshift-dns/node-resolver-94z8j" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009569 5173 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009722 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009745 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009760 5173 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009898 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009916 5173 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009929 5173 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.009941 
5173 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010061 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010076 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010089 5173 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010101 5173 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010228 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010246 5173 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010258 5173 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010270 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010282 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010318 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010330 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010342 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath 
\"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010425 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010439 5173 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010451 5173 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010501 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010516 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010527 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010564 5173 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010577 5173 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010588 5173 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010600 5173 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010612 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010656 5173 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010668 5173 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010680 5173 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010691 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010758 5173 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010771 5173 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010810 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010823 5173 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010834 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010845 5173 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010856 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010919 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010931 5173 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010943 5173 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010956 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010969 5173 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010981 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.010992 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011005 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011017 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011028 5173 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011040 5173 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011053 5173 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011065 5173 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011075 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011087 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011098 5173 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011109 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011121 5173 
reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011132 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011143 5173 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011155 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011167 5173 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011179 5173 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011190 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011202 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011212 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011224 5173 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011236 5173 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011248 5173 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011258 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" 
DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011269 5173 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011281 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011293 5173 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011306 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011317 5173 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011326 5173 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011336 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011344 5173 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011377 5173 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011390 5173 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011401 5173 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011412 5173 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011423 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" 
DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011434 5173 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011445 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011457 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011467 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011478 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011489 5173 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011501 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011512 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011524 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011536 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011547 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011556 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011564 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc 
kubenswrapper[5173]: I1209 14:13:19.011573 5173 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011580 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011589 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011597 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011606 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011614 5173 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011624 5173 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011632 5173 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011642 5173 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011650 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011658 5173 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011666 5173 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011674 5173 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011682 5173 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011690 5173 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011699 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011707 5173 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011715 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011723 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011732 5173 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011740 5173 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011748 5173 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011756 5173 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011764 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011783 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011793 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011801 5173 
reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011809 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011817 5173 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011825 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011832 5173 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011841 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011848 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011856 5173 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011864 5173 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011871 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011879 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011887 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011898 5173 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011906 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath 
\"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011916 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011923 5173 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011931 5173 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011939 5173 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.011947 5173 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012223 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012234 5173 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012242 5173 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012251 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012258 5173 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012266 5173 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012273 5173 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012281 5173 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012288 5173 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012297 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012305 5173 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012313 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012321 5173 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012328 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012336 5173 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012345 5173 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012381 5173 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012390 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012398 5173 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012406 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012414 5173 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012422 5173 
reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012429 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012437 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012444 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.012455 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.013209 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/07ddf926-e4f7-4486-920c-8d83fca5b4da-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-srjbf\" (UID: \"07ddf926-e4f7-4486-920c-8d83fca5b4da\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.013697 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a80ae74e-7470-4168-bdc1-454fa2137d7a-multus-daemon-config\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.018150 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-d24z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80ae74e-7470-4168-bdc1-454fa2137d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7glnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d24z7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.018651 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh9pw\" (UniqueName: \"kubernetes.io/projected/a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37-kube-api-access-vh9pw\") pod \"node-resolver-94z8j\" (UID: \"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37\") " pod="openshift-dns/node-resolver-94z8j" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.019132 5173 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:13:19 crc kubenswrapper[5173]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ -f "/env/_master" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: set -o allexport Dec 09 14:13:19 crc kubenswrapper[5173]: 
source "/env/_master" Dec 09 14:13:19 crc kubenswrapper[5173]: set +o allexport Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Dec 09 14:13:19 crc kubenswrapper[5173]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 09 14:13:19 crc kubenswrapper[5173]: ho_enable="--enable-hybrid-overlay" Dec 09 14:13:19 crc kubenswrapper[5173]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 09 14:13:19 crc kubenswrapper[5173]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 09 14:13:19 crc kubenswrapper[5173]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 09 14:13:19 crc kubenswrapper[5173]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 09 14:13:19 crc kubenswrapper[5173]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 09 14:13:19 crc kubenswrapper[5173]: --webhook-host=127.0.0.1 \ Dec 09 14:13:19 crc kubenswrapper[5173]: --webhook-port=9743 \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${ho_enable} \ Dec 09 14:13:19 crc kubenswrapper[5173]: --enable-interconnect \ Dec 09 14:13:19 crc kubenswrapper[5173]: --disable-approver \ Dec 09 14:13:19 crc kubenswrapper[5173]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 09 14:13:19 crc kubenswrapper[5173]: --wait-for-kubernetes-api=200s \ Dec 09 14:13:19 crc kubenswrapper[5173]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 09 14:13:19 crc kubenswrapper[5173]: --loglevel="${LOGLEVEL}" Dec 09 14:13:19 crc kubenswrapper[5173]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:13:19 crc kubenswrapper[5173]: > logger="UnhandledError" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.021257 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s95xm\" (UniqueName: \"kubernetes.io/projected/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-kube-api-access-s95xm\") pod \"network-metrics-daemon-lbnx5\" (UID: \"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\") " pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.021422 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qdhf\" (UniqueName: \"kubernetes.io/projected/9716f570-4790-4075-a3c3-42114eb7728e-kube-api-access-2qdhf\") pod \"node-ca-trx55\" (UID: \"9716f570-4790-4075-a3c3-42114eb7728e\") " pod="openshift-image-registry/node-ca-trx55" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.021651 5173 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:13:19 crc kubenswrapper[5173]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ -f "/env/_master" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: set -o allexport Dec 09 14:13:19 crc kubenswrapper[5173]: source "/env/_master" Dec 09 14:13:19 crc kubenswrapper[5173]: set +o allexport Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 09 14:13:19 crc kubenswrapper[5173]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 09 14:13:19 crc kubenswrapper[5173]: --disable-webhook \ Dec 09 14:13:19 crc kubenswrapper[5173]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 09 14:13:19 crc 
kubenswrapper[5173]: --loglevel="${LOGLEVEL}" Dec 09 14:13:19 crc kubenswrapper[5173]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:13:19 crc kubenswrapper[5173]: > logger="UnhandledError" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.022838 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.023715 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdfcm\" (UniqueName: \"kubernetes.io/projected/07ddf926-e4f7-4486-920c-8d83fca5b4da-kube-api-access-mdfcm\") pod \"ovnkube-control-plane-57b78d8988-srjbf\" (UID: \"07ddf926-e4f7-4486-920c-8d83fca5b4da\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.026770 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-94z8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh9pw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94z8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.027137 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7glnp\" (UniqueName: \"kubernetes.io/projected/a80ae74e-7470-4168-bdc1-454fa2137d7a-kube-api-access-7glnp\") pod \"multus-d24z7\" (UID: \"a80ae74e-7470-4168-bdc1-454fa2137d7a\") " pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.031133 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5t48\" (UniqueName: \"kubernetes.io/projected/e370197d-9d3c-48ce-8973-ceed80782226-kube-api-access-q5t48\") pod \"multus-additional-cni-plugins-mw8tp\" (UID: \"e370197d-9d3c-48ce-8973-ceed80782226\") " pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.034997 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9716f570-4790-4075-a3c3-42114eb7728e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qdhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.042331 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d7eab1-c137-4702-9f40-82ffc645bd99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://843a523bdd75f421c91ce69ed248e099d8a783680b394eca105778950f9d908f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5352232afbb3c547e95e2f19704e725de9906fff2ae76ca7f228ddf65d71f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5352232afbb3c547e95e2f19704e725de9906fff2ae76ca7f228ddf65d71f124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.049468 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.049697 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.049810 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.049874 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.049931 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:19Z","lastTransitionTime":"2025-12-09T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.049960 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.064391 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.067708 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:19 crc kubenswrapper[5173]: W1209 14:13:19.076480 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-1fba75fa22c158f14583a7ee6291d0cca135d42649cd0bdc5b8a7f43cb25501b WatchSource:0}: Error finding container 1fba75fa22c158f14583a7ee6291d0cca135d42649cd0bdc5b8a7f43cb25501b: Status 404 returned error can't find the container with id 1fba75fa22c158f14583a7ee6291d0cca135d42649cd0bdc5b8a7f43cb25501b Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.080994 5173 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.083052 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.084032 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.085957 5173 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:13:19 crc kubenswrapper[5173]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 09 14:13:19 crc kubenswrapper[5173]: apiVersion: v1 Dec 09 14:13:19 crc kubenswrapper[5173]: clusters: Dec 09 14:13:19 crc kubenswrapper[5173]: - cluster: Dec 09 14:13:19 crc kubenswrapper[5173]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 09 14:13:19 crc kubenswrapper[5173]: server: https://api-int.crc.testing:6443 Dec 09 14:13:19 crc kubenswrapper[5173]: name: default-cluster Dec 09 14:13:19 crc kubenswrapper[5173]: contexts: Dec 09 14:13:19 crc kubenswrapper[5173]: - context: Dec 09 14:13:19 crc kubenswrapper[5173]: cluster: default-cluster Dec 09 14:13:19 crc kubenswrapper[5173]: namespace: default Dec 09 14:13:19 crc kubenswrapper[5173]: user: default-auth Dec 09 14:13:19 crc kubenswrapper[5173]: name: default-context Dec 09 14:13:19 crc kubenswrapper[5173]: current-context: default-context Dec 09 14:13:19 crc kubenswrapper[5173]: kind: Config Dec 09 14:13:19 crc kubenswrapper[5173]: preferences: {} Dec 09 14:13:19 crc kubenswrapper[5173]: users: Dec 09 14:13:19 crc kubenswrapper[5173]: - name: default-auth Dec 09 14:13:19 crc kubenswrapper[5173]: user: Dec 09 14:13:19 crc kubenswrapper[5173]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 09 14:13:19 crc kubenswrapper[5173]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 09 14:13:19 crc kubenswrapper[5173]: EOF Dec 09 14:13:19 crc kubenswrapper[5173]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5p5kj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-4hj6p_openshift-ovn-kubernetes(49bec440-391d-48d9-9bc6-a14f40787067): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:13:19 crc kubenswrapper[5173]: > logger="UnhandledError" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.087153 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" podUID="49bec440-391d-48d9-9bc6-a14f40787067" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.090415 5173 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: W1209 14:13:19.095689 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a8dd347_8a1b_4551_a318_abe7c12df817.slice/crio-247860d10475d6efd2bfb1d942d9e88dd5128e08dc0fbd0c1599c03a58df673e WatchSource:0}: Error finding container 247860d10475d6efd2bfb1d942d9e88dd5128e08dc0fbd0c1599c03a58df673e: Status 404 returned error can't find the container with id 247860d10475d6efd2bfb1d942d9e88dd5128e08dc0fbd0c1599c03a58df673e Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.097745 5173 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start 
--payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6tzp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-pxfmg_openshift-machine-config-operator(8a8dd347-8a1b-4551-a318-abe7c12df817): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.100184 5173 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6tzp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-pxfmg_openshift-machine-config-operator(8a8dd347-8a1b-4551-a318-abe7c12df817): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.101864 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.126844 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-d24z7" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.132839 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f29a9c75-e9f9-4865-b566-af6dce495e92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\
\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:13:11Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:13:10.679235 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:13:10.679403 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:13:10.680393 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3111523604/tls.crt::/tmp/serving-cert-3111523604/tls.key\\\\\\\"\\\\nI1209 14:13:11.231871 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:13:11.234068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:13:11.234094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:13:11.234126 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:13:11.234133 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:13:11.238119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1209 14:13:11.238145 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 14:13:11.238151 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:13:11.238169 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:13:11.238176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:13:11.238180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:13:11.238183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:13:11.238187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 14:13:11.240654 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:13:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\
\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.134878 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-94z8j" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.140054 5173 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:13:19 crc kubenswrapper[5173]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 09 14:13:19 crc kubenswrapper[5173]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 09 14:13:19 crc kubenswrapper[5173]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7glnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-d24z7_openshift-multus(a80ae74e-7470-4168-bdc1-454fa2137d7a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:13:19 crc kubenswrapper[5173]: > logger="UnhandledError" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.141241 5173 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-d24z7" podUID="a80ae74e-7470-4168-bdc1-454fa2137d7a" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.143564 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" Dec 09 14:13:19 crc kubenswrapper[5173]: W1209 14:13:19.147285 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3bf0ff7_fd6f_4e6b_b94f_b6b5b67c8f37.slice/crio-8284e335e151f04dfb9e2fab7ae0932e4aa60529c1712d8b380d61d68244f53c WatchSource:0}: Error finding container 8284e335e151f04dfb9e2fab7ae0932e4aa60529c1712d8b380d61d68244f53c: Status 404 returned error can't find the container with id 8284e335e151f04dfb9e2fab7ae0932e4aa60529c1712d8b380d61d68244f53c Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.150219 5173 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:13:19 crc kubenswrapper[5173]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 09 14:13:19 crc kubenswrapper[5173]: set -uo pipefail Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 09 14:13:19 crc kubenswrapper[5173]: HOSTS_FILE="/etc/hosts" Dec 09 14:13:19 crc kubenswrapper[5173]: TEMP_FILE="/tmp/hosts.tmp" Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: # Make a temporary file with the old hosts file's attributes. Dec 09 14:13:19 crc kubenswrapper[5173]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 09 14:13:19 crc kubenswrapper[5173]: echo "Failed to preserve hosts file. Exiting." Dec 09 14:13:19 crc kubenswrapper[5173]: exit 1 Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: while true; do Dec 09 14:13:19 crc kubenswrapper[5173]: declare -A svc_ips Dec 09 14:13:19 crc kubenswrapper[5173]: for svc in "${services[@]}"; do Dec 09 14:13:19 crc kubenswrapper[5173]: # Fetch service IP from cluster dns if present. We make several tries Dec 09 14:13:19 crc kubenswrapper[5173]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 09 14:13:19 crc kubenswrapper[5173]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 09 14:13:19 crc kubenswrapper[5173]: # support UDP loadbalancers and require reaching DNS through TCP. 
Dec 09 14:13:19 crc kubenswrapper[5173]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 09 14:13:19 crc kubenswrapper[5173]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 09 14:13:19 crc kubenswrapper[5173]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 09 14:13:19 crc kubenswrapper[5173]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 09 14:13:19 crc kubenswrapper[5173]: for i in ${!cmds[*]} Dec 09 14:13:19 crc kubenswrapper[5173]: do Dec 09 14:13:19 crc kubenswrapper[5173]: ips=($(eval "${cmds[i]}")) Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: svc_ips["${svc}"]="${ips[@]}" Dec 09 14:13:19 crc kubenswrapper[5173]: break Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: done Dec 09 14:13:19 crc kubenswrapper[5173]: done Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: # Update /etc/hosts only if we get valid service IPs Dec 09 14:13:19 crc kubenswrapper[5173]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 09 14:13:19 crc kubenswrapper[5173]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 09 14:13:19 crc kubenswrapper[5173]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 09 14:13:19 crc kubenswrapper[5173]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 09 14:13:19 crc kubenswrapper[5173]: sleep 60 & wait Dec 09 14:13:19 crc kubenswrapper[5173]: continue Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: # Append resolver entries for services Dec 09 14:13:19 crc kubenswrapper[5173]: rc=0 Dec 09 14:13:19 crc kubenswrapper[5173]: for svc in "${!svc_ips[@]}"; do Dec 09 14:13:19 crc kubenswrapper[5173]: for ip in ${svc_ips[${svc}]}; do Dec 09 14:13:19 crc kubenswrapper[5173]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 09 14:13:19 crc kubenswrapper[5173]: done Dec 09 14:13:19 crc kubenswrapper[5173]: done Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ $rc -ne 0 ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: sleep 60 & wait Dec 09 14:13:19 crc kubenswrapper[5173]: continue Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 09 14:13:19 crc kubenswrapper[5173]: # Replace /etc/hosts with our modified version if needed Dec 09 14:13:19 crc kubenswrapper[5173]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 09 14:13:19 crc kubenswrapper[5173]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: sleep 60 & wait Dec 09 14:13:19 crc kubenswrapper[5173]: unset svc_ips Dec 09 14:13:19 crc kubenswrapper[5173]: done Dec 09 14:13:19 crc kubenswrapper[5173]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vh9pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-94z8j_openshift-dns(a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:13:19 crc kubenswrapper[5173]: > logger="UnhandledError" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.151255 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-trx55" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.151340 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.151402 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.151391 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-94z8j" podUID="a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.151414 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.151461 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.151479 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:19Z","lastTransitionTime":"2025-12-09T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:19 crc kubenswrapper[5173]: W1209 14:13:19.156704 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode370197d_9d3c_48ce_8973_ceed80782226.slice/crio-101a3460ba9b073727203f55fad81eab00f1b9b5e0c0d27a665c1979269e3678 WatchSource:0}: Error finding container 101a3460ba9b073727203f55fad81eab00f1b9b5e0c0d27a665c1979269e3678: Status 404 returned error can't find the container with id 101a3460ba9b073727203f55fad81eab00f1b9b5e0c0d27a665c1979269e3678 Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.157822 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.159377 5173 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5t48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-mw8tp_openshift-multus(e370197d-9d3c-48ce-8973-ceed80782226): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.160548 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" podUID="e370197d-9d3c-48ce-8973-ceed80782226" Dec 09 14:13:19 crc kubenswrapper[5173]: W1209 14:13:19.170504 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07ddf926_e4f7_4486_920c_8d83fca5b4da.slice/crio-aec09d0b30d733986639f1dabb0a479287c8f17efd8e1b77e2d9a223494532e9 WatchSource:0}: Error finding container aec09d0b30d733986639f1dabb0a479287c8f17efd8e1b77e2d9a223494532e9: Status 404 returned error can't find the container with id aec09d0b30d733986639f1dabb0a479287c8f17efd8e1b77e2d9a223494532e9 Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.170892 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72458547-4bad-48ff-be39-8828056b739c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ded317057b16388136754d75b632a51e96153d2e647d0b58e89ac5f3732b778d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b658001d1e245caf6af8b7e926021b65cf14fe05e112bd9f5ef1b3b34dbc397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8f547532154a93a64f89399378cd1ddf1d539f5ccdf318f5358ab3393b1a30ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://004085d552ba1c7640d1262d02bd33a94f35afa0dcfa640e560588a800163b1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.171114 5173 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:13:19 crc kubenswrapper[5173]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 09 14:13:19 crc kubenswrapper[5173]: while [ true ]; Dec 09 14:13:19 crc kubenswrapper[5173]: do 
Dec 09 14:13:19 crc kubenswrapper[5173]: for f in $(ls /tmp/serviceca); do Dec 09 14:13:19 crc kubenswrapper[5173]: echo $f Dec 09 14:13:19 crc kubenswrapper[5173]: ca_file_path="/tmp/serviceca/${f}" Dec 09 14:13:19 crc kubenswrapper[5173]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 09 14:13:19 crc kubenswrapper[5173]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 09 14:13:19 crc kubenswrapper[5173]: if [ -e "${reg_dir_path}" ]; then Dec 09 14:13:19 crc kubenswrapper[5173]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 09 14:13:19 crc kubenswrapper[5173]: else Dec 09 14:13:19 crc kubenswrapper[5173]: mkdir $reg_dir_path Dec 09 14:13:19 crc kubenswrapper[5173]: cp $ca_file_path $reg_dir_path/ca.crt Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: done Dec 09 14:13:19 crc kubenswrapper[5173]: for d in $(ls /etc/docker/certs.d); do Dec 09 14:13:19 crc kubenswrapper[5173]: echo $d Dec 09 14:13:19 crc kubenswrapper[5173]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 09 14:13:19 crc kubenswrapper[5173]: reg_conf_path="/tmp/serviceca/${dp}" Dec 09 14:13:19 crc kubenswrapper[5173]: if [ ! -e "${reg_conf_path}" ]; then Dec 09 14:13:19 crc kubenswrapper[5173]: rm -rf /etc/docker/certs.d/$d Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: done Dec 09 14:13:19 crc kubenswrapper[5173]: sleep 60 & wait ${!} Dec 09 14:13:19 crc kubenswrapper[5173]: done Dec 09 14:13:19 crc kubenswrapper[5173]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qdhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-trx55_openshift-image-registry(9716f570-4790-4075-a3c3-42114eb7728e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:13:19 crc kubenswrapper[5173]: > logger="UnhandledError" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.172257 5173 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:13:19 crc kubenswrapper[5173]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 09 14:13:19 crc kubenswrapper[5173]: set -euo pipefail Dec 09 14:13:19 crc kubenswrapper[5173]: 
TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 09 14:13:19 crc kubenswrapper[5173]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 09 14:13:19 crc kubenswrapper[5173]: # As the secret mount is optional we must wait for the files to be present. Dec 09 14:13:19 crc kubenswrapper[5173]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 09 14:13:19 crc kubenswrapper[5173]: TS=$(date +%s) Dec 09 14:13:19 crc kubenswrapper[5173]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 09 14:13:19 crc kubenswrapper[5173]: HAS_LOGGED_INFO=0 Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: log_missing_certs(){ Dec 09 14:13:19 crc kubenswrapper[5173]: CUR_TS=$(date +%s) Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 09 14:13:19 crc kubenswrapper[5173]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 09 14:13:19 crc kubenswrapper[5173]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 09 14:13:19 crc kubenswrapper[5173]: HAS_LOGGED_INFO=1 Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: } Dec 09 14:13:19 crc kubenswrapper[5173]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Dec 09 14:13:19 crc kubenswrapper[5173]: log_missing_certs Dec 09 14:13:19 crc kubenswrapper[5173]: sleep 5 Dec 09 14:13:19 crc kubenswrapper[5173]: done Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 09 14:13:19 crc kubenswrapper[5173]: exec /usr/bin/kube-rbac-proxy \ Dec 09 14:13:19 crc kubenswrapper[5173]: --logtostderr \ Dec 09 14:13:19 crc kubenswrapper[5173]: --secure-listen-address=:9108 \ Dec 09 14:13:19 crc kubenswrapper[5173]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 09 14:13:19 crc kubenswrapper[5173]: --upstream=http://127.0.0.1:29108/ \ Dec 09 14:13:19 crc kubenswrapper[5173]: --tls-private-key-file=${TLS_PK} \ Dec 09 14:13:19 crc kubenswrapper[5173]: --tls-cert-file=${TLS_CERT} Dec 09 14:13:19 crc kubenswrapper[5173]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdfcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-srjbf_openshift-ovn-kubernetes(07ddf926-e4f7-4486-920c-8d83fca5b4da): CreateContainerConfigError: services have not yet 
been read at least once, cannot construct envvars Dec 09 14:13:19 crc kubenswrapper[5173]: > logger="UnhandledError" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.172301 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-trx55" podUID="9716f570-4790-4075-a3c3-42114eb7728e" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.174846 5173 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:13:19 crc kubenswrapper[5173]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ -f "/env/_master" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: set -o allexport Dec 09 14:13:19 crc kubenswrapper[5173]: source "/env/_master" Dec 09 14:13:19 crc kubenswrapper[5173]: set +o allexport Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: ovn_v4_join_subnet_opt= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "" != "" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: ovn_v6_join_subnet_opt= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "" != "" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: ovn_v4_transit_switch_subnet_opt= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "" != "" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: ovn_v6_transit_switch_subnet_opt= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "" != "" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: dns_name_resolver_enabled_flag= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "false" == "true" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: # This is needed so that converting clusters from GA to TP Dec 09 14:13:19 crc kubenswrapper[5173]: # will rollout control plane pods as well Dec 09 14:13:19 crc kubenswrapper[5173]: network_segmentation_enabled_flag= Dec 09 14:13:19 crc kubenswrapper[5173]: multi_network_enabled_flag= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "true" == "true" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: multi_network_enabled_flag="--enable-multi-network" Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "true" == "true" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "true" != 
"true" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: multi_network_enabled_flag="--enable-multi-network" Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: route_advertisements_enable_flag= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "false" == "true" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: preconfigured_udn_addresses_enable_flag= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "false" == "true" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: # Enable multi-network policy if configured (control-plane always full mode) Dec 09 14:13:19 crc kubenswrapper[5173]: multi_network_policy_enabled_flag= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "false" == "true" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: # Enable admin network policy if configured (control-plane always full mode) Dec 09 14:13:19 crc kubenswrapper[5173]: admin_network_policy_enabled_flag= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "true" == "true" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: if [ "shared" == "shared" ]; then Dec 09 14:13:19 crc kubenswrapper[5173]: gateway_mode_flags="--gateway-mode shared" Dec 09 14:13:19 crc kubenswrapper[5173]: elif [ "shared" == "local" ]; then Dec 09 14:13:19 crc kubenswrapper[5173]: gateway_mode_flags="--gateway-mode local" Dec 09 14:13:19 crc kubenswrapper[5173]: else Dec 09 14:13:19 crc kubenswrapper[5173]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Dec 09 14:13:19 crc kubenswrapper[5173]: exit 1 Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 09 14:13:19 crc kubenswrapper[5173]: exec /usr/bin/ovnkube \ Dec 09 14:13:19 crc kubenswrapper[5173]: --enable-interconnect \ Dec 09 14:13:19 crc kubenswrapper[5173]: --init-cluster-manager "${K8S_NODE}" \ Dec 09 14:13:19 crc kubenswrapper[5173]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 09 14:13:19 crc kubenswrapper[5173]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 09 14:13:19 crc kubenswrapper[5173]: --metrics-bind-address "127.0.0.1:29108" \ Dec 09 14:13:19 crc kubenswrapper[5173]: --metrics-enable-pprof \ Dec 09 14:13:19 crc kubenswrapper[5173]: --metrics-enable-config-duration \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${ovn_v4_join_subnet_opt} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${ovn_v6_join_subnet_opt} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${dns_name_resolver_enabled_flag} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${persistent_ips_enabled_flag} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${multi_network_enabled_flag} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${network_segmentation_enabled_flag} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${gateway_mode_flags} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${route_advertisements_enable_flag} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${preconfigured_udn_addresses_enable_flag} \ Dec 09 14:13:19 crc kubenswrapper[5173]: --enable-egress-ip=true \ Dec 09 14:13:19 crc kubenswrapper[5173]: --enable-egress-firewall=true \ Dec 09 14:13:19 crc kubenswrapper[5173]: --enable-egress-qos=true \ Dec 09 14:13:19 crc kubenswrapper[5173]: --enable-egress-service=true \ Dec 09 14:13:19 crc kubenswrapper[5173]: --enable-multicast \ Dec 09 14:13:19 crc kubenswrapper[5173]: --enable-multi-external-gateway=true \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${multi_network_policy_enabled_flag} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${admin_network_policy_enabled_flag} Dec 09 14:13:19 crc kubenswrapper[5173]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdfcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-srjbf_openshift-ovn-kubernetes(07ddf926-e4f7-4486-920c-8d83fca5b4da): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:13:19 crc kubenswrapper[5173]: > logger="UnhandledError" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.175978 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" podUID="07ddf926-e4f7-4486-920c-8d83fca5b4da" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.212043 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.217944 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"7fa8f2f8df9f2742aca8f4f489a913c437db032b956b6650253feef204e7e6ba"} Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.218994 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" event={"ID":"07ddf926-e4f7-4486-920c-8d83fca5b4da","Type":"ContainerStarted","Data":"aec09d0b30d733986639f1dabb0a479287c8f17efd8e1b77e2d9a223494532e9"} Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.219517 5173 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:13:19 crc kubenswrapper[5173]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 09 14:13:19 crc kubenswrapper[5173]: set -o allexport Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: source /etc/kubernetes/apiserver-url.env Dec 09 14:13:19 crc kubenswrapper[5173]: else Dec 09 14:13:19 crc kubenswrapper[5173]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 09 14:13:19 crc kubenswrapper[5173]: exit 1 Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 09 14:13:19 crc kubenswrapper[5173]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 09 14:13:19 crc kubenswrapper[5173]: > logger="UnhandledError"
Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.220318 5173 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 09 14:13:19 crc kubenswrapper[5173]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash
Dec 09 14:13:19 crc kubenswrapper[5173]: set -euo pipefail
Dec 09 14:13:19 crc kubenswrapper[5173]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key
Dec 09 14:13:19 crc kubenswrapper[5173]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt
Dec 09 14:13:19 crc kubenswrapper[5173]: # As the secret mount is optional we must wait for the files to be present.
Dec 09 14:13:19 crc kubenswrapper[5173]: # The service is created in monitor.yaml and this is created in sdn.yaml.
Dec 09 14:13:19 crc kubenswrapper[5173]: TS=$(date +%s)
Dec 09 14:13:19 crc kubenswrapper[5173]: WARN_TS=$(( ${TS} + $(( 20 * 60)) ))
Dec 09 14:13:19 crc kubenswrapper[5173]: HAS_LOGGED_INFO=0
Dec 09 14:13:19 crc kubenswrapper[5173]: 
Dec 09 14:13:19 crc kubenswrapper[5173]: log_missing_certs(){
Dec 09 14:13:19 crc kubenswrapper[5173]: CUR_TS=$(date +%s)
Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then
Dec 09 14:13:19 crc kubenswrapper[5173]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes.
Dec 09 14:13:19 crc kubenswrapper[5173]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then
Dec 09 14:13:19 crc kubenswrapper[5173]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes.
Dec 09 14:13:19 crc kubenswrapper[5173]: HAS_LOGGED_INFO=1
Dec 09 14:13:19 crc kubenswrapper[5173]: fi
Dec 09 14:13:19 crc kubenswrapper[5173]: }
Dec 09 14:13:19 crc kubenswrapper[5173]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do
Dec 09 14:13:19 crc kubenswrapper[5173]: log_missing_certs
Dec 09 14:13:19 crc kubenswrapper[5173]: sleep 5
Dec 09 14:13:19 crc kubenswrapper[5173]: done
Dec 09 14:13:19 crc kubenswrapper[5173]: 
Dec 09 14:13:19 crc kubenswrapper[5173]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy
Dec 09 14:13:19 crc kubenswrapper[5173]: exec /usr/bin/kube-rbac-proxy \
Dec 09 14:13:19 crc kubenswrapper[5173]: --logtostderr \
Dec 09 14:13:19 crc kubenswrapper[5173]: --secure-listen-address=:9108 \
Dec 09 14:13:19 crc kubenswrapper[5173]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \
Dec 09 14:13:19 crc kubenswrapper[5173]: --upstream=http://127.0.0.1:29108/ \
Dec 09 14:13:19 crc kubenswrapper[5173]: --tls-private-key-file=${TLS_PK} \
Dec 09 14:13:19 crc kubenswrapper[5173]: --tls-cert-file=${TLS_CERT}
Dec 09 14:13:19 crc kubenswrapper[5173]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdfcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-srjbf_openshift-ovn-kubernetes(07ddf926-e4f7-4486-920c-8d83fca5b4da): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 09 14:13:19 crc kubenswrapper[5173]: > logger="UnhandledError"
Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.220429 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" event={"ID":"e370197d-9d3c-48ce-8973-ceed80782226","Type":"ContainerStarted","Data":"101a3460ba9b073727203f55fad81eab00f1b9b5e0c0d27a665c1979269e3678"}
Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.220574 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8"
Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.221899 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d24z7"
event={"ID":"a80ae74e-7470-4168-bdc1-454fa2137d7a","Type":"ContainerStarted","Data":"06c6c62dda0d72fae9339994fb9928c28ea4b8c8623a7234bde41dfdb69c88a7"} Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.221923 5173 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5t48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-mw8tp_openshift-multus(e370197d-9d3c-48ce-8973-ceed80782226): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.222104 5173 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:13:19 crc kubenswrapper[5173]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ -f "/env/_master" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: set -o allexport Dec 09 14:13:19 crc kubenswrapper[5173]: source "/env/_master" Dec 09 14:13:19 crc kubenswrapper[5173]: set +o allexport Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: ovn_v4_join_subnet_opt= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "" != "" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: ovn_v6_join_subnet_opt= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "" != "" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: ovn_v4_transit_switch_subnet_opt= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "" != 
"" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: ovn_v6_transit_switch_subnet_opt= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "" != "" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: dns_name_resolver_enabled_flag= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "false" == "true" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: # This is needed so that converting clusters from GA to TP Dec 09 14:13:19 crc kubenswrapper[5173]: # will rollout control plane pods as well Dec 09 14:13:19 crc kubenswrapper[5173]: network_segmentation_enabled_flag= Dec 09 14:13:19 crc kubenswrapper[5173]: multi_network_enabled_flag= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "true" == "true" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: multi_network_enabled_flag="--enable-multi-network" Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "true" == "true" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "true" != "true" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: multi_network_enabled_flag="--enable-multi-network" Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: route_advertisements_enable_flag= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "false" == "true" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: preconfigured_udn_addresses_enable_flag= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "false" == "true" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: # Enable multi-network policy if configured (control-plane always full mode) Dec 09 14:13:19 crc kubenswrapper[5173]: multi_network_policy_enabled_flag= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "false" == "true" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: # Enable admin network policy if configured (control-plane always full mode) Dec 09 14:13:19 crc kubenswrapper[5173]: admin_network_policy_enabled_flag= Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "true" == "true" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 09 14:13:19 
crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: if [ "shared" == "shared" ]; then Dec 09 14:13:19 crc kubenswrapper[5173]: gateway_mode_flags="--gateway-mode shared" Dec 09 14:13:19 crc kubenswrapper[5173]: elif [ "shared" == "local" ]; then Dec 09 14:13:19 crc kubenswrapper[5173]: gateway_mode_flags="--gateway-mode local" Dec 09 14:13:19 crc kubenswrapper[5173]: else Dec 09 14:13:19 crc kubenswrapper[5173]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Dec 09 14:13:19 crc kubenswrapper[5173]: exit 1 Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 09 14:13:19 crc kubenswrapper[5173]: exec /usr/bin/ovnkube \ Dec 09 14:13:19 crc kubenswrapper[5173]: --enable-interconnect \ Dec 09 14:13:19 crc kubenswrapper[5173]: --init-cluster-manager "${K8S_NODE}" \ Dec 09 14:13:19 crc kubenswrapper[5173]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 09 14:13:19 crc kubenswrapper[5173]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 09 14:13:19 crc kubenswrapper[5173]: --metrics-bind-address "127.0.0.1:29108" \ Dec 09 14:13:19 crc kubenswrapper[5173]: --metrics-enable-pprof \ Dec 09 14:13:19 crc kubenswrapper[5173]: --metrics-enable-config-duration \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${ovn_v4_join_subnet_opt} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${ovn_v6_join_subnet_opt} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${dns_name_resolver_enabled_flag} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${persistent_ips_enabled_flag} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${multi_network_enabled_flag} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${network_segmentation_enabled_flag} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${gateway_mode_flags} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${route_advertisements_enable_flag} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${preconfigured_udn_addresses_enable_flag} \ Dec 09 14:13:19 crc kubenswrapper[5173]: --enable-egress-ip=true \ Dec 09 14:13:19 crc kubenswrapper[5173]: --enable-egress-firewall=true \ Dec 09 14:13:19 crc kubenswrapper[5173]: --enable-egress-qos=true \ Dec 09 14:13:19 crc kubenswrapper[5173]: --enable-egress-service=true \ Dec 09 14:13:19 crc kubenswrapper[5173]: --enable-multicast \ Dec 09 14:13:19 crc kubenswrapper[5173]: --enable-multi-external-gateway=true \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${multi_network_policy_enabled_flag} \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${admin_network_policy_enabled_flag} Dec 09 14:13:19 crc kubenswrapper[5173]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} 
{} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdfcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-srjbf_openshift-ovn-kubernetes(07ddf926-e4f7-4486-920c-8d83fca5b4da): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 09 14:13:19 crc kubenswrapper[5173]: > logger="UnhandledError"
Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.223023 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" podUID="e370197d-9d3c-48ce-8973-ceed80782226"
Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.223201 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" podUID="07ddf926-e4f7-4486-920c-8d83fca5b4da"
Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.223399 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerStarted","Data":"2f0e9c0d6183c1f4e13b7b4c20b32cc386f968dd4ca1323bb1e52b6123e35180"}
Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.224796 5173 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 09 14:13:19 crc kubenswrapper[5173]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig
Dec 09 14:13:19 crc kubenswrapper[5173]: apiVersion: v1
Dec 09 14:13:19 crc kubenswrapper[5173]: clusters:
Dec 09 14:13:19 crc kubenswrapper[5173]: - cluster:
Dec 09 14:13:19 crc kubenswrapper[5173]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Dec 09 14:13:19 crc kubenswrapper[5173]: server: https://api-int.crc.testing:6443
Dec 09 14:13:19 crc kubenswrapper[5173]: name: default-cluster
Dec 09 14:13:19 crc kubenswrapper[5173]: contexts:
Dec 09 14:13:19 crc kubenswrapper[5173]: - context:
Dec 09 14:13:19 crc kubenswrapper[5173]: cluster: default-cluster
Dec 09 14:13:19 crc kubenswrapper[5173]: namespace: default
Dec 09 14:13:19 crc kubenswrapper[5173]: user: default-auth
Dec 09 14:13:19 crc kubenswrapper[5173]: name: default-context
Dec 09 14:13:19 crc kubenswrapper[5173]: current-context: default-context
Dec 09 14:13:19 crc kubenswrapper[5173]: kind: Config
Dec 09 14:13:19 crc kubenswrapper[5173]: preferences: {}
Dec 09 14:13:19 crc kubenswrapper[5173]: users:
Dec 09 14:13:19 crc kubenswrapper[5173]: - name: default-auth
Dec 09 14:13:19 crc kubenswrapper[5173]: user:
Dec 09 14:13:19 crc kubenswrapper[5173]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem
Dec 09 14:13:19 crc kubenswrapper[5173]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem
Dec 09 14:13:19 crc kubenswrapper[5173]: EOF
Dec 09 14:13:19 crc kubenswrapper[5173]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5p5kj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-4hj6p_openshift-ovn-kubernetes(49bec440-391d-48d9-9bc6-a14f40787067): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 09 14:13:19 crc kubenswrapper[5173]: > logger="UnhandledError"
Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.224798 5173 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 09 14:13:19 crc kubenswrapper[5173]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT=""
Dec 09 14:13:19 crc kubenswrapper[5173]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT
Dec 09 14:13:19 crc kubenswrapper[5173]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7glnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-d24z7_openshift-multus(a80ae74e-7470-4168-bdc1-454fa2137d7a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:13:19 crc kubenswrapper[5173]: > logger="UnhandledError" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.226473 5173 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" podUID="49bec440-391d-48d9-9bc6-a14f40787067"
Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.226529 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-d24z7" podUID="a80ae74e-7470-4168-bdc1-454fa2137d7a"
Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.226560 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-94z8j" event={"ID":"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37","Type":"ContainerStarted","Data":"8284e335e151f04dfb9e2fab7ae0932e4aa60529c1712d8b380d61d68244f53c"}
Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.227576 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" event={"ID":"8a8dd347-8a1b-4551-a318-abe7c12df817","Type":"ContainerStarted","Data":"247860d10475d6efd2bfb1d942d9e88dd5128e08dc0fbd0c1599c03a58df673e"}
Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.229548 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"5a65da646e390ce203d750479ed0e7ab9c5a3104fa04a5933c9deb191509037e"}
Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.229763 5173 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 09 14:13:19 crc kubenswrapper[5173]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash
Dec 09 14:13:19 crc kubenswrapper[5173]: set -uo pipefail
Dec 09 14:13:19 crc kubenswrapper[5173]: 
Dec 09 14:13:19 crc kubenswrapper[5173]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM
Dec 09 14:13:19 crc kubenswrapper[5173]: 
Dec 09 14:13:19 crc kubenswrapper[5173]: OPENSHIFT_MARKER="openshift-generated-node-resolver"
Dec 09 14:13:19 crc kubenswrapper[5173]: HOSTS_FILE="/etc/hosts"
Dec 09 14:13:19 crc kubenswrapper[5173]: TEMP_FILE="/tmp/hosts.tmp"
Dec 09 14:13:19 crc kubenswrapper[5173]: 
Dec 09 14:13:19 crc kubenswrapper[5173]: IFS=', ' read -r -a services <<< "${SERVICES}"
Dec 09 14:13:19 crc kubenswrapper[5173]: 
Dec 09 14:13:19 crc kubenswrapper[5173]: # Make a temporary file with the old hosts file's attributes.
Dec 09 14:13:19 crc kubenswrapper[5173]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then
Dec 09 14:13:19 crc kubenswrapper[5173]: echo "Failed to preserve hosts file. Exiting."
Dec 09 14:13:19 crc kubenswrapper[5173]: exit 1
Dec 09 14:13:19 crc kubenswrapper[5173]: fi
Dec 09 14:13:19 crc kubenswrapper[5173]: 
Dec 09 14:13:19 crc kubenswrapper[5173]: while true; do
Dec 09 14:13:19 crc kubenswrapper[5173]: declare -A svc_ips
Dec 09 14:13:19 crc kubenswrapper[5173]: for svc in "${services[@]}"; do
Dec 09 14:13:19 crc kubenswrapper[5173]: # Fetch service IP from cluster dns if present. We make several tries
Dec 09 14:13:19 crc kubenswrapper[5173]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones
Dec 09 14:13:19 crc kubenswrapper[5173]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not
Dec 09 14:13:19 crc kubenswrapper[5173]: # support UDP loadbalancers and require reaching DNS through TCP.
Dec 09 14:13:19 crc kubenswrapper[5173]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Dec 09 14:13:19 crc kubenswrapper[5173]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Dec 09 14:13:19 crc kubenswrapper[5173]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Dec 09 14:13:19 crc kubenswrapper[5173]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"')
Dec 09 14:13:19 crc kubenswrapper[5173]: for i in ${!cmds[*]}
Dec 09 14:13:19 crc kubenswrapper[5173]: do
Dec 09 14:13:19 crc kubenswrapper[5173]: ips=($(eval "${cmds[i]}"))
Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then
Dec 09 14:13:19 crc kubenswrapper[5173]: svc_ips["${svc}"]="${ips[@]}"
Dec 09 14:13:19 crc kubenswrapper[5173]: break
Dec 09 14:13:19 crc kubenswrapper[5173]: fi
Dec 09 14:13:19 crc kubenswrapper[5173]: done
Dec 09 14:13:19 crc kubenswrapper[5173]: done
Dec 09 14:13:19 crc kubenswrapper[5173]: 
Dec 09 14:13:19 crc kubenswrapper[5173]: # Update /etc/hosts only if we get valid service IPs
Dec 09 14:13:19 crc kubenswrapper[5173]: # We will not update /etc/hosts when there is coredns service outage or api unavailability
Dec 09 14:13:19 crc kubenswrapper[5173]: # Stale entries could exist in /etc/hosts if the service is deleted
Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ -n "${svc_ips[*]-}" ]]; then
Dec 09 14:13:19 crc kubenswrapper[5173]: # Build a new hosts file from /etc/hosts with our custom entries filtered out
Dec 09 14:13:19 crc kubenswrapper[5173]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then
Dec 09 14:13:19 crc kubenswrapper[5173]: # Only continue rebuilding the hosts entries if its original content is preserved
Dec 09 14:13:19 crc kubenswrapper[5173]: sleep 60 & wait
Dec 09 14:13:19 crc kubenswrapper[5173]: continue
Dec 09 14:13:19 crc kubenswrapper[5173]: fi
Dec 09 14:13:19 crc kubenswrapper[5173]: 
Dec 09 14:13:19 crc kubenswrapper[5173]: # Append resolver entries for services
Dec 09 14:13:19 crc kubenswrapper[5173]: rc=0
Dec 09 14:13:19 crc kubenswrapper[5173]: for svc in "${!svc_ips[@]}"; do
Dec 09 14:13:19 crc kubenswrapper[5173]: for ip in ${svc_ips[${svc}]}; do
Dec 09 14:13:19 crc kubenswrapper[5173]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$?
Dec 09 14:13:19 crc kubenswrapper[5173]: done
Dec 09 14:13:19 crc kubenswrapper[5173]: done
Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ $rc -ne 0 ]]; then
Dec 09 14:13:19 crc kubenswrapper[5173]: sleep 60 & wait
Dec 09 14:13:19 crc kubenswrapper[5173]: continue
Dec 09 14:13:19 crc kubenswrapper[5173]: fi
Dec 09 14:13:19 crc kubenswrapper[5173]: 
Dec 09 14:13:19 crc kubenswrapper[5173]: 
Dec 09 14:13:19 crc kubenswrapper[5173]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior
Dec 09 14:13:19 crc kubenswrapper[5173]: # Replace /etc/hosts with our modified version if needed
Dec 09 14:13:19 crc kubenswrapper[5173]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}"
Dec 09 14:13:19 crc kubenswrapper[5173]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn
Dec 09 14:13:19 crc kubenswrapper[5173]: fi
Dec 09 14:13:19 crc kubenswrapper[5173]: sleep 60 & wait
Dec 09 14:13:19 crc kubenswrapper[5173]: unset svc_ips
Dec 09 14:13:19 crc kubenswrapper[5173]: done
Dec 09 14:13:19 crc kubenswrapper[5173]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vh9pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-94z8j_openshift-dns(a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 09 14:13:19 crc kubenswrapper[5173]: > logger="UnhandledError"
Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.229801 5173 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start 
--payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6tzp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-pxfmg_openshift-machine-config-operator(8a8dd347-8a1b-4551-a318-abe7c12df817): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.230541 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-trx55" event={"ID":"9716f570-4790-4075-a3c3-42114eb7728e","Type":"ContainerStarted","Data":"4f1bcff1f3401c677a77bffacda2abf7ab0a67285ad375f89fffc0d7d6633ec0"} Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.230752 5173 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:13:19 crc kubenswrapper[5173]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ -f "/env/_master" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: set -o allexport Dec 09 14:13:19 crc kubenswrapper[5173]: source "/env/_master" Dec 09 14:13:19 crc kubenswrapper[5173]: set +o allexport Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 09 14:13:19 crc kubenswrapper[5173]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 09 14:13:19 crc kubenswrapper[5173]: ho_enable="--enable-hybrid-overlay" Dec 09 14:13:19 crc kubenswrapper[5173]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 09 14:13:19 crc kubenswrapper[5173]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 09 14:13:19 crc kubenswrapper[5173]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 09 14:13:19 crc kubenswrapper[5173]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 09 14:13:19 crc kubenswrapper[5173]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 09 14:13:19 crc kubenswrapper[5173]: --webhook-host=127.0.0.1 \ Dec 09 14:13:19 crc kubenswrapper[5173]: --webhook-port=9743 \ Dec 09 14:13:19 crc kubenswrapper[5173]: ${ho_enable} \ Dec 09 14:13:19 crc kubenswrapper[5173]: --enable-interconnect \ Dec 09 14:13:19 crc kubenswrapper[5173]: --disable-approver \ Dec 09 14:13:19 crc kubenswrapper[5173]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 09 14:13:19 crc kubenswrapper[5173]: --wait-for-kubernetes-api=200s \ Dec 09 14:13:19 crc kubenswrapper[5173]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 09 14:13:19 crc kubenswrapper[5173]: --loglevel="${LOGLEVEL}" Dec 09 14:13:19 crc kubenswrapper[5173]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars
Dec 09 14:13:19 crc kubenswrapper[5173]: > logger="UnhandledError"
Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.230818 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-94z8j" podUID="a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37"
Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.231657 5173 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6tzp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-pxfmg_openshift-machine-config-operator(8a8dd347-8a1b-4551-a318-abe7c12df817): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.231760 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"1fba75fa22c158f14583a7ee6291d0cca135d42649cd0bdc5b8a7f43cb25501b"}
Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.232011 5173 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 09 14:13:19 crc kubenswrapper[5173]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM
Dec 09 14:13:19 crc kubenswrapper[5173]: while [ true ];
Dec 09 14:13:19 crc kubenswrapper[5173]: do
Dec 09 14:13:19 crc kubenswrapper[5173]: for f in $(ls /tmp/serviceca); do
Dec 09 14:13:19 crc kubenswrapper[5173]: echo $f
Dec 09 14:13:19 crc kubenswrapper[5173]: ca_file_path="/tmp/serviceca/${f}"
Dec 09 14:13:19 crc kubenswrapper[5173]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/')
Dec 09 14:13:19 crc kubenswrapper[5173]: reg_dir_path="/etc/docker/certs.d/${f}"
Dec 09 14:13:19 crc kubenswrapper[5173]: if [ -e "${reg_dir_path}" ]; then
Dec 09 14:13:19 crc kubenswrapper[5173]: cp -u $ca_file_path $reg_dir_path/ca.crt
Dec 09 14:13:19 crc kubenswrapper[5173]: else
Dec 09 14:13:19 crc kubenswrapper[5173]: mkdir $reg_dir_path
Dec 09 14:13:19 crc kubenswrapper[5173]: cp $ca_file_path $reg_dir_path/ca.crt
Dec 09 14:13:19 crc kubenswrapper[5173]: fi
Dec 09 14:13:19 crc kubenswrapper[5173]: done
Dec 09 14:13:19 crc kubenswrapper[5173]: for d in $(ls /etc/docker/certs.d); do
Dec 09 14:13:19 crc kubenswrapper[5173]: echo $d
Dec 09 14:13:19 crc kubenswrapper[5173]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./')
Dec 09 14:13:19 crc kubenswrapper[5173]: reg_conf_path="/tmp/serviceca/${dp}"
Dec 09 14:13:19 crc kubenswrapper[5173]: if [ ! -e "${reg_conf_path}" ]; then
Dec 09 14:13:19 crc kubenswrapper[5173]: rm -rf /etc/docker/certs.d/$d
Dec 09 14:13:19 crc kubenswrapper[5173]: fi
Dec 09 14:13:19 crc kubenswrapper[5173]: done
Dec 09 14:13:19 crc kubenswrapper[5173]: sleep 60 & wait ${!}
Dec 09 14:13:19 crc kubenswrapper[5173]: done
Dec 09 14:13:19 crc kubenswrapper[5173]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qdhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-trx55_openshift-image-registry(9716f570-4790-4075-a3c3-42114eb7728e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 09 14:13:19 crc kubenswrapper[5173]: > logger="UnhandledError"
Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.232884 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot 
construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.233065 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-trx55" podUID="9716f570-4790-4075-a3c3-42114eb7728e" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.233082 5173 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 09 14:13:19 crc kubenswrapper[5173]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 09 14:13:19 crc kubenswrapper[5173]: if [[ -f "/env/_master" ]]; then Dec 09 14:13:19 crc kubenswrapper[5173]: set -o allexport Dec 09 14:13:19 crc kubenswrapper[5173]: source "/env/_master" Dec 09 14:13:19 crc kubenswrapper[5173]: set +o allexport Dec 09 14:13:19 crc kubenswrapper[5173]: fi Dec 09 14:13:19 crc kubenswrapper[5173]: Dec 09 14:13:19 crc kubenswrapper[5173]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 09 14:13:19 crc kubenswrapper[5173]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 09 14:13:19 crc kubenswrapper[5173]: --disable-webhook \ Dec 09 14:13:19 crc kubenswrapper[5173]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 09 14:13:19 crc kubenswrapper[5173]: --loglevel="${LOGLEVEL}" Dec 09 14:13:19 crc kubenswrapper[5173]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 09 14:13:19 crc kubenswrapper[5173]: > logger="UnhandledError" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.233079 5173 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.235184 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.235218 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.250333 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.254036 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.254091 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.254102 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.254119 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.254129 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:19Z","lastTransitionTime":"2025-12-09T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.291828 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e370197d-9d3c-48ce-8973-ceed80782226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mw8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.315262 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.315422 5173 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.315471 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:20.315457595 +0000 UTC m=+83.240739842 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.316248 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.316412 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.316435 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.316448 5173 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.316498 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:20.316484767 +0000 UTC m=+83.241767014 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.316558 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.316647 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.316666 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.316675 5173 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.316702 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:20.316693874 +0000 UTC m=+83.241976121 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.316747 5173 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.316590 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.316827 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:20.316807727 +0000 UTC m=+83.242089984 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.327404 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lbnx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s95xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s95xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lbnx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.356404 5173 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.356460 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.356472 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.356489 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.356503 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:19Z","lastTransitionTime":"2025-12-09T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.370307 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ddf926-e4f7-4486-920c-8d83fca5b4da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdfcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdfcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-srjbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.410799 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66859a4-682c-4aac-9f59-8077ef0987ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://22111dcf300ec536cc6a1016634e372dd581bfc8c1965f1ef72025eca7bd27a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://16ff70fe83260431bb761ab05817e149e47d0aa9773fad494524d389e0eb98ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://24bfde81209161c48b816ce80d7a29d805ef660302aa6b8a9350fc545c7f8727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://25dca08dc8d419af5e78f9e368a80743b48798dcd50aee5f0858bc1727a824e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25dca08dc8d419af5e78f9e368a80743b48798dcd50aee5f0858bc1727a824e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.418169 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.418394 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:20.418326716 +0000 UTC m=+83.343608963 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.450388 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.459084 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.459138 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.459150 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.459166 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.459177 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:19Z","lastTransitionTime":"2025-12-09T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.490298 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a8dd347-8a1b-4551-a318-abe7c12df817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tzp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tzp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pxfmg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.519276 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs\") pod \"network-metrics-daemon-lbnx5\" (UID: \"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\") " pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.519505 5173 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.519615 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs podName:5d73c2ad-08e4-439f-8c5f-adb67b27ef4b nodeName:}" failed. No retries permitted until 2025-12-09 14:13:20.519591718 +0000 UTC m=+83.444874055 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs") pod "network-metrics-daemon-lbnx5" (UID: "5d73c2ad-08e4-439f-8c5f-adb67b27ef4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.539520 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9bf6317-206d-45f3-b5c6-d074a93429f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://07cb68ad1d7939b032d461e4405874dbea3c0c580d711c636b9c1bc98534ddad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://40690c3e060def2a504e5e96407e7e684a5d65be6a03e3c0c2964c5613ac3a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1501e862b689b4aabc3ad6a8aa5f8021ccdf06efb17e8f190b8a58d3a57b778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://78cdb950caf4d3cbe020e51b49b41823961f04a520144ddc0f055b1ac4015773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allo
catedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://54680e71891f8c4b8d3378c6a2cebfadccf93498ccbb0cf6da1b23063f9256eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ed11940cfc0c03b0cd7b18b1d7cbe1683725e871a03c6c43986b37be8a6ac784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed11940cfc0c03b0cd7b18b1d7cbe1683725e871a03c6c43986b37be8a6ac784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://eacd477eb0e3af4fc175c9fa0420e700ae385a111ecbd41c975c2e3687639d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eacd477eb0e3af4fc175c9fa0420e700ae385a111ecbd41c975c2e3687639d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f9d4c76e5aead2cf533b5799e9d8b585203b915594390a713b19c361c77dab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9d4c76e5aead2cf533b5799e9d8b585203b915594390a713b19c361c77dab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:12:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:12:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.563506 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.563546 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.563555 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.563573 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.563583 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:19Z","lastTransitionTime":"2025-12-09T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.570875 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.610726 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-d24z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80ae74e-7470-4168-bdc1-454fa2137d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7glnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d24z7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.648925 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-94z8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh9pw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94z8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.665644 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.665701 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.665713 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.665731 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.665742 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:19Z","lastTransitionTime":"2025-12-09T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.689963 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trx55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9716f570-4790-4075-a3c3-42114eb7728e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qdhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.729373 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d7eab1-c137-4702-9f40-82ffc645bd99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://843a523bdd75f421c91ce69ed248e099d8a783680b394eca105778950f9d908f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5352232afbb3c547e95e2f19704e725de9906fff2ae76ca7f228ddf65d71f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5352232afbb3c547e95e2f19704e725de9906fff2ae76ca7f228ddf65d71f124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.769067 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.769107 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.769152 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.769173 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.769186 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:19Z","lastTransitionTime":"2025-12-09T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.770517 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.809716 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.852729 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f29a9c75-e9f9-4865-b566-af6dce495e92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:13:11Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:13:10.679235 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:13:10.679403 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:13:10.680393 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3111523604/tls.crt::/tmp/serving-cert-3111523604/tls.key\\\\\\\"\\\\nI1209 14:13:11.231871 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:13:11.234068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:13:11.234094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:13:11.234126 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:13:11.234133 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:13:11.238119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1209 14:13:11.238145 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 14:13:11.238151 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:13:11.238169 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:13:11.238176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:13:11.238180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:13:11.238183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:13:11.238187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 14:13:11.240654 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:13:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.870410 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:13:19 crc kubenswrapper[5173]: E1209 14:13:19.870598 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.871906 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.871950 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.871962 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.871978 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.871988 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:19Z","lastTransitionTime":"2025-12-09T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.874798 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.875729 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.877672 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.879448 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.881496 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.882931 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.884258 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 
14:13:19.885368 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.885896 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.887186 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.887916 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.890180 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.890939 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.891598 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72458547-4bad-48ff-be39-8828056b739c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ded317057b16388136754d75b632a51e96153d2e647d0b58e89ac5f3732b778d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b658001d1e245caf6af8b7e926021b65cf14fe05e112bd9f5ef1b3b34dbc397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8f547532154a93a64f89399378cd1ddf1d539f5ccdf318f5358ab3393b1a30ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://004085d552ba1c7640d1262d02bd33a94f35afa0dcfa640e560588a800163b1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.892675 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.893111 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.893867 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.894982 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.896039 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.897183 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.898050 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.898830 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.900487 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.901445 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.902261 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.903434 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.904761 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.906307 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.907003 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.909230 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.910284 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.911367 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.912499 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.913584 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.914764 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.915489 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.916300 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.916929 5173 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" 
path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.917031 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.920181 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.921141 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.922306 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.923130 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.924460 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.925978 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.927823 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.928598 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.930269 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.930555 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.931859 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.933704 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.935150 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.936874 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.937889 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.939686 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.941420 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.943899 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.944806 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" 
path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.946247 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.947233 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.970773 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.974382 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.974430 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.974447 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.974468 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:19 crc kubenswrapper[5173]: I1209 14:13:19.974487 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:19Z","lastTransitionTime":"2025-12-09T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.014732 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e370197d-9d3c-48ce-8973-ceed80782226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mw8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.050550 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lbnx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s95xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s95xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lbnx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.076703 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.076753 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.076764 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.076781 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.076793 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:20Z","lastTransitionTime":"2025-12-09T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.089905 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ddf926-e4f7-4486-920c-8d83fca5b4da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdfcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdfcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-srjbf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.132787 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66859a4-682c-4aac-9f59-8077ef0987ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://22111dcf300ec536cc6a1016634e372dd581bfc8c1965f1ef72025eca7bd27a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://16ff70fe83260431bb761ab05817e149e47d0aa9773fad494524d389e0eb98ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"
cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://24bfde81209161c48b816ce80d7a29d805ef660302aa6b8a9350fc545c7f8727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://25dca08dc8d419af5e78f9e368a80743b48798dcd50aee5f0858bc1727a824e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25dca08dc8d419af5e78f9e368a80743b48798dcd50aee5f0858bc1727a824e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.171272 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.179039 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.179145 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.179169 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.179206 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.179267 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:20Z","lastTransitionTime":"2025-12-09T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.190904 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.191607 5173 scope.go:117] "RemoveContainer" containerID="c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d" Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.191900 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.214363 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49bec440-391d-48d9-9bc6-a14f40787067\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4hj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.281077 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.281113 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:20 crc kubenswrapper[5173]: 
I1209 14:13:20.281131 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.281147 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.281157 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:20Z","lastTransitionTime":"2025-12-09T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.327768 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.327836 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.327858 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.327875 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.327993 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.328006 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.328016 5173 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.328046 5173 secret.go:189] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.328067 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.328063 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:22.32805027 +0000 UTC m=+85.253332517 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.328110 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.328226 5173 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.328267 5173 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.328133 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:22.328113672 +0000 UTC m=+85.253395929 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.328403 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:22.32837453 +0000 UTC m=+85.253656777 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.328525 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:22.328510875 +0000 UTC m=+85.253793122 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.382952 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.383007 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.383019 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.383037 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.383050 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:20Z","lastTransitionTime":"2025-12-09T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.428979 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.429132 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:22.429103055 +0000 UTC m=+85.354385302 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.485516 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.485559 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.485568 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.485583 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.485594 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:20Z","lastTransitionTime":"2025-12-09T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.530751 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs\") pod \"network-metrics-daemon-lbnx5\" (UID: \"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\") " pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.530903 5173 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.530961 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs podName:5d73c2ad-08e4-439f-8c5f-adb67b27ef4b nodeName:}" failed. No retries permitted until 2025-12-09 14:13:22.530946046 +0000 UTC m=+85.456228293 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs") pod "network-metrics-daemon-lbnx5" (UID: "5d73c2ad-08e4-439f-8c5f-adb67b27ef4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.587830 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.587966 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.588083 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.588108 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.588121 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:20Z","lastTransitionTime":"2025-12-09T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.690416 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.690467 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.690480 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.690495 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.690505 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:20Z","lastTransitionTime":"2025-12-09T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.792388 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.792453 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.792469 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.792490 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.792505 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:20Z","lastTransitionTime":"2025-12-09T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.870694 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.870815 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.870840 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.870965 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.871126 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:13:20 crc kubenswrapper[5173]: E1209 14:13:20.871268 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lbnx5" podUID="5d73c2ad-08e4-439f-8c5f-adb67b27ef4b" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.895325 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.895376 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.895386 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.895400 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.895414 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:20Z","lastTransitionTime":"2025-12-09T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.997853 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.997916 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.997930 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.997948 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:20 crc kubenswrapper[5173]: I1209 14:13:20.997962 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:20Z","lastTransitionTime":"2025-12-09T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.101251 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.101330 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.101431 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.101477 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.101500 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:21Z","lastTransitionTime":"2025-12-09T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.204499 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.204547 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.204560 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.204576 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.204590 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:21Z","lastTransitionTime":"2025-12-09T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.307280 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.307325 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.307334 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.307385 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.307403 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:21Z","lastTransitionTime":"2025-12-09T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.410088 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.410545 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.410740 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.410954 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.411138 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:21Z","lastTransitionTime":"2025-12-09T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.513276 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.513331 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.513348 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.513390 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.513405 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:21Z","lastTransitionTime":"2025-12-09T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.616072 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.616156 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.616170 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.616195 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.616208 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:21Z","lastTransitionTime":"2025-12-09T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.718348 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.718433 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.718471 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.718511 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.718534 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:21Z","lastTransitionTime":"2025-12-09T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.820773 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.821038 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.821099 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.821157 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.821211 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:21Z","lastTransitionTime":"2025-12-09T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.877276 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:13:21 crc kubenswrapper[5173]: E1209 14:13:21.877588 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.923174 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.923213 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.923223 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.923236 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:21 crc kubenswrapper[5173]: I1209 14:13:21.923245 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:21Z","lastTransitionTime":"2025-12-09T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.025490 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.025831 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.025920 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.026002 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.026079 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:22Z","lastTransitionTime":"2025-12-09T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.129044 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.129837 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.129939 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.130038 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.130138 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:22Z","lastTransitionTime":"2025-12-09T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.232026 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.232429 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.232645 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.232789 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.232908 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:22Z","lastTransitionTime":"2025-12-09T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.334857 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.334913 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.334925 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.334942 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.334956 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:22Z","lastTransitionTime":"2025-12-09T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.352721 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.352776 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.352807 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.352841 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:22 crc kubenswrapper[5173]: E1209 14:13:22.352956 5173 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:13:22 crc kubenswrapper[5173]: E1209 14:13:22.352979 5173 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:13:22 crc kubenswrapper[5173]: E1209 14:13:22.353028 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:26.353008264 +0000 UTC m=+89.278290521 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:13:22 crc kubenswrapper[5173]: E1209 14:13:22.353039 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:13:22 crc kubenswrapper[5173]: E1209 14:13:22.353075 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:13:22 crc kubenswrapper[5173]: E1209 14:13:22.353093 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:26.353060816 +0000 UTC m=+89.278343063 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:13:22 crc kubenswrapper[5173]: E1209 14:13:22.353094 5173 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:22 crc kubenswrapper[5173]: E1209 14:13:22.353183 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:26.353152199 +0000 UTC m=+89.278434506 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:22 crc kubenswrapper[5173]: E1209 14:13:22.353196 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:13:22 crc kubenswrapper[5173]: E1209 14:13:22.353210 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:13:22 crc kubenswrapper[5173]: E1209 14:13:22.353225 5173 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:22 crc kubenswrapper[5173]: E1209 14:13:22.353266 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:26.353246912 +0000 UTC m=+89.278529159 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.437915 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.437970 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.437985 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.438007 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.438022 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:22Z","lastTransitionTime":"2025-12-09T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.454026 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:22 crc kubenswrapper[5173]: E1209 14:13:22.454247 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:26.454217775 +0000 UTC m=+89.379500042 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.540750 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.540818 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.540839 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.540866 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.540887 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:22Z","lastTransitionTime":"2025-12-09T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.554999 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs\") pod \"network-metrics-daemon-lbnx5\" (UID: \"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\") " pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:22 crc kubenswrapper[5173]: E1209 14:13:22.555511 5173 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:13:22 crc kubenswrapper[5173]: E1209 14:13:22.556657 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs podName:5d73c2ad-08e4-439f-8c5f-adb67b27ef4b nodeName:}" failed. No retries permitted until 2025-12-09 14:13:26.556629922 +0000 UTC m=+89.481912169 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs") pod "network-metrics-daemon-lbnx5" (UID: "5d73c2ad-08e4-439f-8c5f-adb67b27ef4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.643807 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.643896 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.643922 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.643953 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.643976 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:22Z","lastTransitionTime":"2025-12-09T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.746386 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.746453 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.746471 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.746497 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.746516 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:22Z","lastTransitionTime":"2025-12-09T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.849034 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.849071 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.849080 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.849093 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.849104 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:22Z","lastTransitionTime":"2025-12-09T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.869714 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:22 crc kubenswrapper[5173]: E1209 14:13:22.869879 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.869885 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:22 crc kubenswrapper[5173]: E1209 14:13:22.869976 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lbnx5" podUID="5d73c2ad-08e4-439f-8c5f-adb67b27ef4b" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.869994 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:13:22 crc kubenswrapper[5173]: E1209 14:13:22.870148 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.951311 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.951423 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.951455 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.951488 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:22 crc kubenswrapper[5173]: I1209 14:13:22.951511 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:22Z","lastTransitionTime":"2025-12-09T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:23 crc kubenswrapper[5173]: I1209 14:13:23.054403 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:23 crc kubenswrapper[5173]: I1209 14:13:23.054493 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:23 crc kubenswrapper[5173]: I1209 14:13:23.054516 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:23 crc kubenswrapper[5173]: I1209 14:13:23.054563 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:23 crc kubenswrapper[5173]: I1209 14:13:23.054598 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:23Z","lastTransitionTime":"2025-12-09T14:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 14:13:23 crc kubenswrapper[5173]: I1209 14:13:23.108966 5173 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 09 14:13:23 crc kubenswrapper[5173]: I1209 14:13:23.156477 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:23 crc kubenswrapper[5173]: I1209 14:13:23.156524 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:23 crc kubenswrapper[5173]: I1209 14:13:23.156537 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:23 crc kubenswrapper[5173]: I1209 14:13:23.156551 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:23 crc kubenswrapper[5173]: I1209 14:13:23.156560 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:23Z","lastTransitionTime":"2025-12-09T14:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the five-line node-status block above repeats verbatim, apart from timestamps, at 14:13:23.258, 14:13:23.360, and 14:13:23.463]
Dec 09 14:13:23 crc kubenswrapper[5173]: I1209 14:13:23.485157 5173 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
[node-status block repeats at 14:13:23.565 and 14:13:23.668]
[node-status block repeats at 14:13:23.770 and 14:13:23.872]
Dec 09 14:13:23 crc kubenswrapper[5173]: I1209 14:13:23.874733 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 09 14:13:23 crc kubenswrapper[5173]: E1209 14:13:23.874879 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
[node-status block repeats at 14:13:23.974 and 14:13:24.077]
[node-status block repeats every ~100 ms from 14:13:24.179 through 14:13:24.697]
[node-status block repeats at 14:13:24.800]
Dec 09 14:13:24 crc kubenswrapper[5173]: I1209 14:13:24.870181 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 09 14:13:24 crc kubenswrapper[5173]: I1209 14:13:24.870181 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 09 14:13:24 crc kubenswrapper[5173]: I1209 14:13:24.870181 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lbnx5"
Dec 09 14:13:24 crc kubenswrapper[5173]: E1209 14:13:24.870541 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 09 14:13:24 crc kubenswrapper[5173]: E1209 14:13:24.870381 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 09 14:13:24 crc kubenswrapper[5173]: E1209 14:13:24.870672 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lbnx5" podUID="5d73c2ad-08e4-439f-8c5f-adb67b27ef4b"
[node-status block repeats at 14:13:24.903 and 14:13:25.005]
[node-status block repeats every ~100 ms from 14:13:25.107 through 14:13:25.624]
[node-status block repeats at 14:13:25.727 and 14:13:25.830]
Dec 09 14:13:25 crc kubenswrapper[5173]: I1209 14:13:25.870547 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 09 14:13:25 crc kubenswrapper[5173]: E1209 14:13:25.870817 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
[node-status block repeats at 14:13:25.933 and 14:13:26.037]
[node-status block repeats at 14:13:26.140, 14:13:26.243, and 14:13:26.346]
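The collapsed status blocks above repeat byte-for-byte except for their timestamps. A minimal triage sketch, assuming Python is available, that the log keeps the klog line shape visible here, and that kubelet.log is the (placeholder) input path:

import re
import sys
from collections import Counter

# A kubenswrapper/klog entry looks like:
#   Dec 09 14:13:23 crc kubenswrapper[5173]: I1209 14:13:23.156477 5173 setters.go:618] "Node became not ready" ...
ENTRY = re.compile(
    r'^\w{3} \d{2} [\d:]{8} \S+ kubenswrapper\[\d+\]: '
    r'(?P<level>[IWE])\d{4} [\d:.]+ \d+ (?P<src>\S+?)\] (?P<msg>.*)$'
)

def summarize(path):
    counts = Counter()
    for raw in open(path, encoding="utf-8", errors="replace"):
        m = ENTRY.match(raw.rstrip("\n"))
        if m:  # wrapped continuation lines simply do not match and are skipped
            # Keying on (level, source, truncated message) folds entries that
            # differ only in their timestamps into a single bucket.
            counts[(m["level"], m["src"], m["msg"][:120])] += 1
    for (level, src, msg), n in counts.most_common(15):
        print(f"{n:6d}x {level} {src} {msg}")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log")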
Dec 09 14:13:26 crc kubenswrapper[5173]: I1209 14:13:26.402936 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 09 14:13:26 crc kubenswrapper[5173]: I1209 14:13:26.403016 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 09 14:13:26 crc kubenswrapper[5173]: I1209 14:13:26.403038 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 09 14:13:26 crc kubenswrapper[5173]: I1209 14:13:26.403064 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 09 14:13:26 crc kubenswrapper[5173]: E1209 14:13:26.403153 5173 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 09 14:13:26 crc kubenswrapper[5173]: E1209 14:13:26.403206 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:34.403192851 +0000 UTC m=+97.328475098 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 09 14:13:26 crc kubenswrapper[5173]: E1209 14:13:26.403233 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 09 14:13:26 crc kubenswrapper[5173]: E1209 14:13:26.403287 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 09 14:13:26 crc kubenswrapper[5173]: E1209 14:13:26.403304 5173 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 14:13:26 crc kubenswrapper[5173]: E1209 14:13:26.403315 5173 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 09 14:13:26 crc kubenswrapper[5173]: E1209 14:13:26.403233 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 09 14:13:26 crc kubenswrapper[5173]: E1209 14:13:26.403418 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:34.403392198 +0000 UTC m=+97.328674465 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 14:13:26 crc kubenswrapper[5173]: E1209 14:13:26.403428 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 09 14:13:26 crc kubenswrapper[5173]: E1209 14:13:26.403444 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:34.403432919 +0000 UTC m=+97.328715256 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 09 14:13:26 crc kubenswrapper[5173]: E1209 14:13:26.403450 5173 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 14:13:26 crc kubenswrapper[5173]: E1209 14:13:26.403526 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:34.403505911 +0000 UTC m=+97.328788188 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
[node-status block repeats at 14:13:26.449]
Dec 09 14:13:26 crc kubenswrapper[5173]: I1209 14:13:26.503693 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:26 crc kubenswrapper[5173]: E1209 14:13:26.503919 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:34.503889175 +0000 UTC m=+97.429171422 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
[node-status block repeats at 14:13:26.552]
Dec 09 14:13:26 crc kubenswrapper[5173]: I1209 14:13:26.605889 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs\") pod \"network-metrics-daemon-lbnx5\" (UID: \"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\") " pod="openshift-multus/network-metrics-daemon-lbnx5"
Dec 09 14:13:26 crc kubenswrapper[5173]: E1209 14:13:26.606193 5173 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 14:13:26 crc kubenswrapper[5173]: E1209 14:13:26.606344 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs podName:5d73c2ad-08e4-439f-8c5f-adb67b27ef4b nodeName:}" failed. No retries permitted until 2025-12-09 14:13:34.606311534 +0000 UTC m=+97.531593821 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs") pod "network-metrics-daemon-lbnx5" (UID: "5d73c2ad-08e4-439f-8c5f-adb67b27ef4b") : object "openshift-multus"/"metrics-daemon-secret" not registered
[node-status block repeats at 14:13:26.655 and 14:13:26.761]
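Each failed volume operation above is rescheduled individually: nestedpendingoperations.go records a per-volume deadline ("No retries permitted until ... (durationBeforeRetry 8s)"), so the kubelet backs off per volume rather than per pod. A small sketch, under the same assumptions as the summarizer above, relying only on the message text visible in these entries and tolerating entries that wrap across physical lines:

import re
import sys

# Matches the retry notice emitted by nestedpendingoperations.go; reading the
# whole file and allowing whitespace before "Error:" covers entries whose text
# is wrapped across physical lines.
RETRY = re.compile(
    r'No retries permitted until (?P<until>\S+ [\d:.]+) \+0000 UTC m=\+\S+ '
    r'\(durationBeforeRetry (?P<backoff>\w+)\)\.\s+'
    r'Error: (?P<op>\S+) failed for volume "(?P<vol>[^"]+)"'
)

def retry_deadlines(path):
    text = open(path, encoding="utf-8", errors="replace").read()
    for m in RETRY.finditer(text):
        print(f'{m["vol"]:45s} {m["op"]:25s} backoff={m["backoff"]:4s} next={m["until"]}')

if __name__ == "__main__":
    retry_deadlines(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log")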
[node-status block repeats at 14:13:26.864]
Dec 09 14:13:26 crc kubenswrapper[5173]: I1209 14:13:26.870224 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 09 14:13:26 crc kubenswrapper[5173]: I1209 14:13:26.870278 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 09 14:13:26 crc kubenswrapper[5173]: I1209 14:13:26.870300 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lbnx5"
Dec 09 14:13:26 crc kubenswrapper[5173]: E1209 14:13:26.870439 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 09 14:13:26 crc kubenswrapper[5173]: E1209 14:13:26.870621 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lbnx5" podUID="5d73c2ad-08e4-439f-8c5f-adb67b27ef4b"
Dec 09 14:13:26 crc kubenswrapper[5173]: E1209 14:13:26.870801 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
[node-status block repeats at 14:13:26.967 and 14:13:27.069]
[node-status block repeats every ~100 ms from 14:13:27.171 through 14:13:27.686]
Has your network provider started?"} Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.790027 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.790082 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.790092 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.790109 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.790120 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:27Z","lastTransitionTime":"2025-12-09T14:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.870456 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:13:27 crc kubenswrapper[5173]: E1209 14:13:27.870664 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.890324 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49bec440-391d-48d9-9bc6-a14f40787067\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629
230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\
\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4hj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.892388 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.892446 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.892468 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.892492 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.892513 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:27Z","lastTransitionTime":"2025-12-09T14:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.902509 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a8dd347-8a1b-4551-a318-abe7c12df817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tzp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tzp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pxfmg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.926325 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9bf6317-206d-45f3-b5c6-d074a93429f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://07cb68ad1d7939b032d461e4405874dbea3c0c580d711c636b9c1bc98534ddad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://40690c3e060def2a504e5e96407e7e684a5d65be6a03e3c0c2964c5613ac3a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\
"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1501e862b689b4aabc3ad6a8aa5f8021ccdf06efb17e8f190b8a58d3a57b778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://78cdb950caf4d3cbe020e51b49b41823961f04a520144ddc0f055b1ac4015773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://54680e71891f8c4b8d3378c6a2cebfadccf93498ccbb0cf6da1b23063f9256eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ed11940cfc0c03b0cd7b18b1d7cbe1683725e871a03c6c43986b37be8a6ac784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed11940cfc0c03b0cd7b18b1d7cbe1683725e871a03c6c43986b37be8a6ac784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://eacd477eb0e3af4fc175c9fa0420e700ae385a111ecbd41c975c2e3687639d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eacd477eb0e3af4fc175c9fa0420e700ae385a111ecbd41c975c2e3687639d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f9d4c76e5aead2cf533b5799e9d8b585203b915594390a713b19c361c77dab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9d4c76e5aead2cf533b5799e9d8b585203b915594390a713b19c361c77dab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:12:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:12:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.940020 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.952572 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-d24z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80ae74e-7470-4168-bdc1-454fa2137d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7glnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d24z7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.963660 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-94z8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh9pw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94z8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.972229 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trx55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9716f570-4790-4075-a3c3-42114eb7728e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qdhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.981514 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d7eab1-c137-4702-9f40-82ffc645bd99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://843a523bdd75f421c91ce69ed248e099d8a783680b394eca105778950f9d908f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5352232afbb3c547e95e2f19704e725de9906fff2ae76ca7f228ddf65d71f124\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5352232afbb3c547e95e2f19704e725de9906fff2ae76ca7f228ddf65d71f124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.995286 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.995343 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.995388 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.995412 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.995428 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:27Z","lastTransitionTime":"2025-12-09T14:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:27 crc kubenswrapper[5173]: I1209 14:13:27.996725 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.007778 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.019828 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f29a9c75-e9f9-4865-b566-af6dce495e92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:13:11Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:13:10.679235 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:13:10.679403 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:13:10.680393 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3111523604/tls.crt::/tmp/serving-cert-3111523604/tls.key\\\\\\\"\\\\nI1209 14:13:11.231871 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:13:11.234068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:13:11.234094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:13:11.234126 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:13:11.234133 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:13:11.238119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1209 14:13:11.238145 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 14:13:11.238151 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:13:11.238169 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:13:11.238176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:13:11.238180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:13:11.238183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:13:11.238187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 14:13:11.240654 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:13:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.032533 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72458547-4bad-48ff-be39-8828056b739c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ded317057b16388136754d75b632a51e96153d2e647d0b58e89ac5f3732b778d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b658001d1e245caf6af8b7e926021b65cf14fe05e112bd9f5ef1b3b34dbc397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8f547532154a93a64f89399378cd1ddf1d539f5ccdf318f5358ab3393b1a30ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://004085d552ba1c7640d1262d02bd33a94f35afa0dcfa640e560588a800163b1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.047166 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.059234 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.079235 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e370197d-9d3c-48ce-8973-ceed80782226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mw8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.089648 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lbnx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s95xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s95xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lbnx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.098506 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.098576 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.098593 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.098616 5173 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.098631 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:28Z","lastTransitionTime":"2025-12-09T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.104183 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ddf926-e4f7-4486-920c-8d83fca5b4da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdfcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdfcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-srjbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.115836 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66859a4-682c-4aac-9f59-8077ef0987ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://22111dcf300ec536cc6a1016634e372dd581bfc8c1965f1ef72025eca7bd27a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://16ff70fe83260431bb761ab05817e149e47d0aa9773fad494524d389e0eb98ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://24bfde81209161c48b816ce80d7a29d805ef660302aa6b8a9350fc545c7f8727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://25dca08dc8d419af5e78f9e368a80743b48798dcd50aee5f0858bc1727a824e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25dca08dc8d419af5e78f9e368a80743b48798dcd50aee5f0858bc1727a824e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:28 
crc kubenswrapper[5173]: I1209 14:13:28.126547 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.200002 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.200145 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.200158 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.200176 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.200189 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:28Z","lastTransitionTime":"2025-12-09T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.301976 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.302051 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.302073 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.302099 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.302119 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:28Z","lastTransitionTime":"2025-12-09T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.404870 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.404938 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.404952 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.404972 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.404988 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:28Z","lastTransitionTime":"2025-12-09T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.508552 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.508625 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.508650 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.508680 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.508708 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:28Z","lastTransitionTime":"2025-12-09T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.596107 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.596586 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.596618 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.596640 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.596653 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:28Z","lastTransitionTime":"2025-12-09T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:28 crc kubenswrapper[5173]: E1209 14:13:28.613296 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8a1fb4-b79b-40c8-87ab-701c2aec36f3\\\",\\\"systemUUID\\\":\\\"b723954a-7a7f-4e69-bb6f-4921ffb1c94e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.617513 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.617595 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.617610 5173 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.617650 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.617664 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:28Z","lastTransitionTime":"2025-12-09T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:28 crc kubenswrapper[5173]: E1209 14:13:28.628927 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8a1fb4-b79b-40c8-87ab-701c2aec36f3\\\",\\\"systemUUID\\\":\\\"b723954a-7a7f-4e69-bb6f-4921ffb1c94e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.633250 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.633293 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.633318 5173 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.633333 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.633343 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:28Z","lastTransitionTime":"2025-12-09T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:28 crc kubenswrapper[5173]: E1209 14:13:28.643452 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8a1fb4-b79b-40c8-87ab-701c2aec36f3\\\",\\\"systemUUID\\\":\\\"b723954a-7a7f-4e69-bb6f-4921ffb1c94e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.647232 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.647273 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.647287 5173 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.647304 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.647318 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:28Z","lastTransitionTime":"2025-12-09T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:28 crc kubenswrapper[5173]: E1209 14:13:28.656775 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8a1fb4-b79b-40c8-87ab-701c2aec36f3\\\",\\\"systemUUID\\\":\\\"b723954a-7a7f-4e69-bb6f-4921ffb1c94e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.660179 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.660221 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.660233 5173 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.660252 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.660265 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:28Z","lastTransitionTime":"2025-12-09T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:28 crc kubenswrapper[5173]: E1209 14:13:28.674485 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7d8a1fb4-b79b-40c8-87ab-701c2aec36f3\\\",\\\"systemUUID\\\":\\\"b723954a-7a7f-4e69-bb6f-4921ffb1c94e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:13:28 crc kubenswrapper[5173]: E1209 14:13:28.674753 5173 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.676147 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.676186 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.676203 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.676225 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.676242 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:28Z","lastTransitionTime":"2025-12-09T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.778604 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.778663 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.778677 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.778696 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.778708 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:28Z","lastTransitionTime":"2025-12-09T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.870376 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lbnx5"
Dec 09 14:13:28 crc kubenswrapper[5173]: E1209 14:13:28.870560 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lbnx5" podUID="5d73c2ad-08e4-439f-8c5f-adb67b27ef4b"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.870376 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 09 14:13:28 crc kubenswrapper[5173]: E1209 14:13:28.870669 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.870689 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 09 14:13:28 crc kubenswrapper[5173]: E1209 14:13:28.870738 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.882000 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.882034 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.882043 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.882058 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.882068 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:28Z","lastTransitionTime":"2025-12-09T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.985102 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.985186 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.985208 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.985228 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:28 crc kubenswrapper[5173]: I1209 14:13:28.985242 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:28Z","lastTransitionTime":"2025-12-09T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.087939 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.087991 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.088004 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.088021 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.088034 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:29Z","lastTransitionTime":"2025-12-09T14:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.190652 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.190746 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.190774 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.190811 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.190838 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:29Z","lastTransitionTime":"2025-12-09T14:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.293103 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.293153 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.293163 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.293179 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.293188 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:29Z","lastTransitionTime":"2025-12-09T14:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.395126 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.395178 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.395190 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.395207 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.395218 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:29Z","lastTransitionTime":"2025-12-09T14:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.497571 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.497625 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.497637 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.497653 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.497665 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:29Z","lastTransitionTime":"2025-12-09T14:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.600567 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.600634 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.600651 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.600673 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.600688 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:29Z","lastTransitionTime":"2025-12-09T14:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.703272 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.703321 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.703334 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.703376 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.703398 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:29Z","lastTransitionTime":"2025-12-09T14:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.805513 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.805572 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.805585 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.805604 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.805618 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:29Z","lastTransitionTime":"2025-12-09T14:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.876578 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 09 14:13:29 crc kubenswrapper[5173]: E1209 14:13:29.876759 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.909286 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.909417 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.909447 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.909519 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:29 crc kubenswrapper[5173]: I1209 14:13:29.909538 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:29Z","lastTransitionTime":"2025-12-09T14:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.013474 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.013562 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.013581 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.013610 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.013630 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:30Z","lastTransitionTime":"2025-12-09T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.116868 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.116952 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.116974 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.117000 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.117018 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:30Z","lastTransitionTime":"2025-12-09T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.220385 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.220776 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.220789 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.220807 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.220823 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:30Z","lastTransitionTime":"2025-12-09T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.324137 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.324451 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.324548 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.324646 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.324710 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:30Z","lastTransitionTime":"2025-12-09T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.428031 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.428109 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.428124 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.428148 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.428171 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:30Z","lastTransitionTime":"2025-12-09T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.530859 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.530920 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.530937 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.530959 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.530971 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:30Z","lastTransitionTime":"2025-12-09T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.634078 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.634124 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.634135 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.634151 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.634162 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:30Z","lastTransitionTime":"2025-12-09T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.736647 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.736743 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.736769 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.736803 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.736830 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:30Z","lastTransitionTime":"2025-12-09T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.839661 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.839709 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.839720 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.839737 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.839749 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:30Z","lastTransitionTime":"2025-12-09T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.870030 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.870093 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lbnx5"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.870248 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 09 14:13:30 crc kubenswrapper[5173]: E1209 14:13:30.870255 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 09 14:13:30 crc kubenswrapper[5173]: E1209 14:13:30.870493 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 09 14:13:30 crc kubenswrapper[5173]: E1209 14:13:30.870671 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lbnx5" podUID="5d73c2ad-08e4-439f-8c5f-adb67b27ef4b"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.942690 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.942785 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.942821 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.942856 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:30 crc kubenswrapper[5173]: I1209 14:13:30.942883 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:30Z","lastTransitionTime":"2025-12-09T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.045985 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.046078 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.046105 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.046134 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.046157 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:31Z","lastTransitionTime":"2025-12-09T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.149542 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.149621 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.149641 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.149678 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.149703 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:31Z","lastTransitionTime":"2025-12-09T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.252277 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.252395 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.252416 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.252441 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.252459 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:31Z","lastTransitionTime":"2025-12-09T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.265578 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d24z7" event={"ID":"a80ae74e-7470-4168-bdc1-454fa2137d7a","Type":"ContainerStarted","Data":"f460a1644c18f7865af7796a312778249adc6d2e94346f6d2c914bd68f28e0d0"}
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.298838 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9bf6317-206d-45f3-b5c6-d074a93429f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://07cb68ad1d7939b032d461e4405874dbea3c0c580d711c636b9c1bc98534ddad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://40690c3e060def2a504e5e96407e7e684a5d65be6a03e3c0c2964c5613ac3a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1501e862b689b4aabc3ad6a8aa5f8021ccdf06efb17e8f190b8a58d3a57b778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://78cdb950caf4d3cbe020e51b49b41823961f04a520144ddc0f055b1ac4015773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://54680e71891f8c4b8d3378c6a2cebfadccf93498ccbb0cf6da1b23063f9256eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ed11940cfc0c03b0cd7b18b1d7cbe1683725e871a03c6c43986b37be8a6ac784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed11940cfc0c03b0cd7b18b1d7cbe1683725e871a03c6c43986b37be8a6ac784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://eacd477eb0e3af4fc175c9fa0420e700ae385a111ecbd41c975c2e3687639d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eacd477eb0e3af4fc175c9fa0420e700ae385a111ecbd41c975c2e3687639d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f9d4c76e5aead2cf533b5799e9d8b585203b915594390a713b19c361c77dab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9d4c76e5aead2cf533b5799e9d8b585203b915594390a713b19c361c77dab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:12:03Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:12:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.311016 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.322186 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-d24z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80ae74e-7470-4168-bdc1-454fa2137d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://f460a1644c18f7865af7796a312778249adc6d2e94346f6d2c914bd68f28e0d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnib
in\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7glnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d24z7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.331227 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-94z8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh9pw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94z8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.340901 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trx55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9716f570-4790-4075-a3c3-42114eb7728e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qdhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.348908 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d7eab1-c137-4702-9f40-82ffc645bd99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://843a523bdd75f421c91ce69ed248e099d8a783680b394eca105778950f9d908f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5352232afbb3c547e95e2f19704e725de9906fff2ae76ca7f228ddf65d71f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5352232afbb3c547e95e2f19704e725de9906fff2ae76ca7f228ddf65d71f124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.354780 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.354823 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.354840 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.354862 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.354877 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:31Z","lastTransitionTime":"2025-12-09T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.360561 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.371670 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.384517 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f29a9c75-e9f9-4865-b566-af6dce495e92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:13:11Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:13:10.679235 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:13:10.679403 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:13:10.680393 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3111523604/tls.crt::/tmp/serving-cert-3111523604/tls.key\\\\\\\"\\\\nI1209 14:13:11.231871 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:13:11.234068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:13:11.234094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:13:11.234126 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:13:11.234133 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:13:11.238119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1209 14:13:11.238145 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 14:13:11.238151 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:13:11.238169 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:13:11.238176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:13:11.238180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:13:11.238183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:13:11.238187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 14:13:11.240654 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:13:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.397742 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72458547-4bad-48ff-be39-8828056b739c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ded317057b16388136754d75b632a51e96153d2e647d0b58e89ac5f3732b778d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b658001d1e245caf6af8b7e926021b65cf14fe05e112bd9f5ef1b3b34dbc397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8f547532154a93a64f89399378cd1ddf1d539f5ccdf318f5358ab3393b1a30ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://004085d552ba1c7640d1262d02bd33a94f35afa0dcfa640e560588a800163b1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.412503 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.421196 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.435241 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e370197d-9d3c-48ce-8973-ceed80782226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mw8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.444195 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lbnx5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s95xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s95xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lbnx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.455452 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ddf926-e4f7-4486-920c-8d83fca5b4da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdfcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdfcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-srjbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.456441 5173 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.456474 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.456482 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.456494 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.456504 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:31Z","lastTransitionTime":"2025-12-09T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.468551 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66859a4-682c-4aac-9f59-8077ef0987ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://22111dcf300ec536cc6a1016634e372dd581bfc8c1965f1ef72025eca7bd27a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://16ff70fe83260431bb761ab05817e149e47d0aa9773fad494524d389e0eb98ef\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://24bfde81209161c48b816ce80d7a29d805ef660302aa6b8a9350fc545c7f8727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://25dca08dc8d419af5e78f9e368a80743b48798dcd50aee5f0858bc1727a824e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25dca08dc8d419af5e78f9e368a80743b48798dcd50aee5f0858bc1727a824e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.480722 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.496950 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49bec440-391d-48d9-9bc6-a14f40787067\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b
21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4hj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.505087 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a8dd347-8a1b-4551-a318-abe7c12df817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tzp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tzp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pxfmg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.558611 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.558667 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.558680 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.558696 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.558708 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:31Z","lastTransitionTime":"2025-12-09T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.660689 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.660735 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.660747 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.660765 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.660776 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:31Z","lastTransitionTime":"2025-12-09T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.763310 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.763345 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.763369 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.763381 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.763393 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:31Z","lastTransitionTime":"2025-12-09T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.865203 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.865250 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.865264 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.865280 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.865293 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:31Z","lastTransitionTime":"2025-12-09T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.869904 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 09 14:13:31 crc kubenswrapper[5173]: E1209 14:13:31.870027 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.967406 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.967468 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.967489 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.967521 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:31 crc kubenswrapper[5173]: I1209 14:13:31.967548 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:31Z","lastTransitionTime":"2025-12-09T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.070889 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.070927 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.070937 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.070951 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.070960 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:32Z","lastTransitionTime":"2025-12-09T14:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.172877 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.172919 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.172929 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.172944 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.172964 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:32Z","lastTransitionTime":"2025-12-09T14:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.270880 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" event={"ID":"07ddf926-e4f7-4486-920c-8d83fca5b4da","Type":"ContainerStarted","Data":"655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500"}
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.272467 5173 generic.go:358] "Generic (PLEG): container finished" podID="49bec440-391d-48d9-9bc6-a14f40787067" containerID="231e33eb8ad573ef8c7345edad3d84a71079fc1f80d66033422174e4d361015f" exitCode=0
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.272525 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerDied","Data":"231e33eb8ad573ef8c7345edad3d84a71079fc1f80d66033422174e4d361015f"}
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.274230 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.274430 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.274572 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.274602 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.274618 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:32Z","lastTransitionTime":"2025-12-09T14:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.288341 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f29a9c75-e9f9-4865-b566-af6dce495e92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:13:11Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:13:10.679235 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:13:10.679403 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:13:10.680393 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3111523604/tls.crt::/tmp/serving-cert-3111523604/tls.key\\\\\\\"\\\\nI1209 14:13:11.231871 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:13:11.234068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:13:11.234094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:13:11.234126 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:13:11.234133 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:13:11.238119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1209 14:13:11.238145 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 14:13:11.238151 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:13:11.238169 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:13:11.238176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:13:11.238180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:13:11.238183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:13:11.238187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 14:13:11.240654 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:13:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.299721 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72458547-4bad-48ff-be39-8828056b739c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ded317057b16388136754d75b632a51e96153d2e647d0b58e89ac5f3732b778d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b658001d1e245caf6af8b7e926021b65cf14fe05e112bd9f5ef1b3b34dbc397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8f547532154a93a64f89399378cd1ddf1d539f5ccdf318f5358ab3393b1a30ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://004085d552ba1c7640d1262d02bd33a94f35afa0dcfa640e560588a800163b1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.309142 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.321442 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.333204 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e370197d-9d3c-48ce-8973-ceed80782226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mw8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.341114 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lbnx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s95xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s95xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lbnx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.348770 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ddf926-e4f7-4486-920c-8d83fca5b4da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdfcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdfcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-srjbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.357265 5173 
status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66859a4-682c-4aac-9f59-8077ef0987ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://22111dcf300ec536cc6a1016634e372dd581bfc8c1965f1ef72025eca7bd27a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://16ff70fe83260431bb761ab05817e149e47d0aa9773fad494524d389e0eb98ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://24bfde81209161c48b816ce80d7a29d805ef660302aa6b8a9350fc545c7f8727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bb
bf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://25dca08dc8d419af5e78f9e368a80743b48798dcd50aee5f0858bc1727a824e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25dca08dc8d419af5e78f9e368a80743b48798dcd50aee5f0858bc1727a824e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.368892 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.376016 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.376058 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.376068 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.376084 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.376094 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:32Z","lastTransitionTime":"2025-12-09T14:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.385057 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49bec440-391d-48d9-9bc6-a14f40787067\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://231e33eb8ad573ef8c7345edad3d84a71079fc1f80d6603342
2174e4d361015f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://231e33eb8ad573ef8c7345edad3d84a71079fc1f80d66033422174e4d361015f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:13:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4hj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.392873 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a8dd347-8a1b-4551-a318-abe7c12df817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tzp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tzp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pxfmg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.409902 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9bf6317-206d-45f3-b5c6-d074a93429f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://07cb68ad1d7939b032d461e4405874dbea3c0c580d711c636b9c1bc98534ddad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://40690c3e060def2a504e5e96407e7e684a5d65be6a03e3c0c2964c5613ac3a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1501e862b689b4aabc3ad6a8aa5f8021ccdf06efb17e8f190b8a58d3a57b778\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://78cdb950caf4d3cbe020e51b49b41823961f04a520144ddc0f055b1ac4015773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://54680e71891f8c4b8d3378c6a2cebfadccf93498ccbb0cf6da1b23063f9256eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ed11940cfc0c03b0cd7b18b1d7cbe1683725e87
1a03c6c43986b37be8a6ac784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed11940cfc0c03b0cd7b18b1d7cbe1683725e871a03c6c43986b37be8a6ac784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://eacd477eb0e3af4fc175c9fa0420e700ae385a111ecbd41c975c2e3687639d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eacd477eb0e3af4fc175c9fa0420e700ae385a111ecbd41c975c2e3687639d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f9d4c76e5aead2cf533b5799e9d8b585203b915594390a713b19c361c77dab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9d4c76e5aead2cf533b5799e9d8b585203b915594390a713b19c361c77dab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:12:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:12:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.419833 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.428466 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-d24z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80ae74e-7470-4168-bdc1-454fa2137d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://f460a1644c18f7865af7796a312778249adc6d2e94346f6d2c914bd68f28e0d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnib
in\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7glnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d24z7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.434431 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-94z8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh9pw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94z8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.441374 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trx55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9716f570-4790-4075-a3c3-42114eb7728e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qdhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.450187 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d7eab1-c137-4702-9f40-82ffc645bd99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://843a523bdd75f421c91ce69ed248e099d8a783680b394eca105778950f9d908f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5352232afbb3c547e95e2f19704e725de9906fff2ae76ca7f228ddf65d71f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5352232afbb3c547e95e2f19704e725de9906fff2ae76ca7f228ddf65d71f124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.462044 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.471711 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.478232 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.478267 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.478276 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.478291 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.478302 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:32Z","lastTransitionTime":"2025-12-09T14:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.580479 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.580532 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.580547 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.580568 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.580583 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:32Z","lastTransitionTime":"2025-12-09T14:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.683013 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.683072 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.683084 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.683100 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.683110 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:32Z","lastTransitionTime":"2025-12-09T14:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.785404 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.785462 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.785477 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.785497 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.785510 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:32Z","lastTransitionTime":"2025-12-09T14:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.870557 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.870766 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:13:32 crc kubenswrapper[5173]: E1209 14:13:32.870940 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.871508 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:32 crc kubenswrapper[5173]: E1209 14:13:32.871597 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lbnx5" podUID="5d73c2ad-08e4-439f-8c5f-adb67b27ef4b" Dec 09 14:13:32 crc kubenswrapper[5173]: E1209 14:13:32.871663 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.888607 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.888662 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.888679 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.888704 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:32 crc kubenswrapper[5173]: I1209 14:13:32.888720 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:32Z","lastTransitionTime":"2025-12-09T14:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.012879 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.012915 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.012927 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.012945 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.012954 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:33Z","lastTransitionTime":"2025-12-09T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.114470 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.114515 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.114526 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.114540 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.114551 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:33Z","lastTransitionTime":"2025-12-09T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.217402 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.217442 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.217451 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.217463 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.217473 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:33Z","lastTransitionTime":"2025-12-09T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.276680 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" event={"ID":"e370197d-9d3c-48ce-8973-ceed80782226","Type":"ContainerStarted","Data":"9a2bdbfffad236c7b28c30bf85c429b736f57f698e23c3707e4d8ff8a0fe6052"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.278904 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerStarted","Data":"3376a0f5a3173a5ec0c06f49feee9428d3596d3ecdaa8ec7fd1a9b782e0c3150"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.278986 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerStarted","Data":"b5e039f291824aa822dd101c3d3c69b2adcedd433290701fc050827ef9923511"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.278997 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerStarted","Data":"86442f9b1ca071f4f9eed36a71a5a1a4955e732d9115098ab6d24b3cd800059c"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.279006 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerStarted","Data":"acdb6f15d5b3a695e73fbb6481f04162b21ec33011cd0f275a5bff46a36788ca"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.279014 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerStarted","Data":"5a539f9e884ee10f4a0bba7a7ce50dd95c423b36c196046435f791e15688e2a0"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.279863 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-94z8j" event={"ID":"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37","Type":"ContainerStarted","Data":"c4420360bd3d1cc1610c686e38b64514a9ee565569ea3859de7246bc2e0a7bf1"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.280800 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-trx55" event={"ID":"9716f570-4790-4075-a3c3-42114eb7728e","Type":"ContainerStarted","Data":"0d8895d901dc5280a3ea184623742d8b415dcd31f1134ad11ee82fc8a71f1da0"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.281959 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"f7e101abf6036df1898e4a5e8a600730f3d9b3a9c01a6dcce26de56433eb5816"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.283619 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" event={"ID":"07ddf926-e4f7-4486-920c-8d83fca5b4da","Type":"ContainerStarted","Data":"8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.288099 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e370197d-9d3c-48ce-8973-ceed80782226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2bdbfffad236c7b28c30bf85c429b736f57f698e23c3707e4d8ff8a0fe6052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093b
db486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mw8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.295685 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lbnx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s95xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s95xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lbnx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.303465 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ddf926-e4f7-4486-920c-8d83fca5b4da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdfcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdfcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-srjbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.319883 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66859a4-682c-4aac-9f59-8077ef0987ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://22111dcf300ec536cc6a1016634e372dd581bfc8c1965f1ef72025eca7bd27a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://16ff70fe83260431bb761ab05817e149e47d0aa9773fad494524d389e0eb98ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://24bfde81209161c48b816ce80d7a29d805ef660302aa6b8a9350fc545c7f8727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://25dca08dc8d419af5e78f9e368a80743b48798dcd50aee5f0858bc1727a824e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25dca08dc8d419af5e78f9e368a80743b48798dcd50aee5f0858bc1727a824e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.320907 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.320939 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.320950 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.320965 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.320974 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:33Z","lastTransitionTime":"2025-12-09T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.332330 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.348981 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49bec440-391d-48d9-9bc6-a14f40787067\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb 
sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath
\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://231e33eb8ad573ef8c7345edad3d84a71079fc1f80d66033422174e4d361015f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://231e33eb8ad573ef8c7345edad3d84a71079fc1f80d66033422174e4d361015f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:13:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\
\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4hj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.357258 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a8dd347-8a1b-4551-a318-abe7c12df817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tzp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tzp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pxfmg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.374853 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9bf6317-206d-45f3-b5c6-d074a93429f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://07cb68ad1d7939b032d461e4405874dbea3c0c580d711c636b9c1bc98534ddad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://40690c3e060def2a504e5e96407e7e684a5d65be6a03e3c0c2964c5613ac3a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1501e862b689b4aabc3ad6a8aa5f8021ccdf06efb17e8f190b8a58d3a57b778\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://78cdb950caf4d3cbe020e51b49b41823961f04a520144ddc0f055b1ac4015773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://54680e71891f8c4b8d3378c6a2cebfadccf93498ccbb0cf6da1b23063f9256eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etc
dctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ed11940cfc0c03b0cd7b18b1d7cbe1683725e871a03c6c43986b37be8a6ac784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed11940cfc0c03b0cd7b18b1d7cbe1683725e871a03c6c43986b37be8a6ac784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://eacd477eb0e3af4fc175c9fa0420e700ae385a111ecbd41c975c2e3687639d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eacd477eb0e3af4fc175c9fa0420e700ae385a111ecbd41c975c2e3687639d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f9d4c76e5aead2cf533b5799e9d8b585203b915594390a713b19c361c77dab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9d4c76e5aead2cf533b5799e9d8b585203b915594390a713b19c361c77dab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:12:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:12:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.385988 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.397933 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-d24z7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80ae74e-7470-4168-bdc1-454fa2137d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://f460a1644c18f7865af7796a312778249adc6d2e94346f6d2c914bd68f28e0d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnib
in\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7glnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d24z7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.407765 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-94z8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh9pw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94z8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.415982 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trx55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9716f570-4790-4075-a3c3-42114eb7728e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qdhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.422703 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.422739 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.422750 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.422769 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.422780 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:33Z","lastTransitionTime":"2025-12-09T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.424655 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d7eab1-c137-4702-9f40-82ffc645bd99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://843a523bdd75f421c91ce69ed248e099d8a783680b394eca105778950f9d908f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5352232afbb3c547e95e2f19704e725de9906fff2ae76ca7f228ddf65d71f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5352232afbb3c547e95e2f19704e725de9906fff2ae76ca7f228ddf65d71f124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.436475 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.446152 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.458278 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f29a9c75-e9f9-4865-b566-af6dce495e92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:13:11Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:13:10.679235 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:13:10.679403 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:13:10.680393 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3111523604/tls.crt::/tmp/serving-cert-3111523604/tls.key\\\\\\\"\\\\nI1209 14:13:11.231871 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:13:11.234068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:13:11.234094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:13:11.234126 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:13:11.234133 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:13:11.238119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1209 14:13:11.238145 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 14:13:11.238151 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:13:11.238169 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:13:11.238176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:13:11.238180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:13:11.238183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:13:11.238187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 14:13:11.240654 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:13:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.470453 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72458547-4bad-48ff-be39-8828056b739c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ded317057b16388136754d75b632a51e96153d2e647d0b58e89ac5f3732b778d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b658001d1e245caf6af8b7e926021b65cf14fe05e112bd9f5ef1b3b34dbc397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8f547532154a93a64f89399378cd1ddf1d539f5ccdf318f5358ab3393b1a30ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://004085d552ba1c7640d1262d02bd33a94f35afa0dcfa640e560588a800163b1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.481639 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.491618 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.500401 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d7eab1-c137-4702-9f40-82ffc645bd99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://843a523bdd75f421c91ce69ed248e099d8a783680b394eca105778950f9d908f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5352232afbb3c547e95e2f19704e725de9906fff2ae76ca7f228ddf65d71f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8
b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5352232afbb3c547e95e2f19704e725de9906fff2ae76ca7f228ddf65d71f124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.511070 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.521973 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.524302 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.524346 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.524391 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.524408 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.524421 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:33Z","lastTransitionTime":"2025-12-09T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.533944 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f29a9c75-e9f9-4865-b566-af6dce495e92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:13:11Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:13:10.679235 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:13:10.679403 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:13:10.680393 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3111523604/tls.crt::/tmp/serving-cert-3111523604/tls.key\\\\\\\"\\\\nI1209 14:13:11.231871 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:13:11.234068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:13:11.234094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:13:11.234126 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:13:11.234133 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:13:11.238119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1209 14:13:11.238145 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 14:13:11.238151 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:13:11.238169 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:13:11.238176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:13:11.238180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:13:11.238183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:13:11.238187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 14:13:11.240654 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:13:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.544304 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72458547-4bad-48ff-be39-8828056b739c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ded317057b16388136754d75b632a51e96153d2e647d0b58e89ac5f3732b778d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b658001d1e245caf6af8b7e926021b65cf14fe05e112bd9f5ef1b3b34dbc397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8f547532154a93a64f89399378cd1ddf1d539f5ccdf318f5358ab3393b1a30ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://004085d552ba1c7640d1262d02bd33a94f35afa0dcfa640e560588a800163b1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.554748 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.563663 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.575867 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e370197d-9d3c-48ce-8973-ceed80782226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2bdbfffad236c7b28c30bf85c429b736f57f698e23c3707e4d8ff8a0fe6052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mw8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.583341 5173 status_manager.go:919] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-lbnx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s95xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s95xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lbnx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.591628 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ddf926-e4f7-4486-920c-8d83fca5b4da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdfcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdfcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP
\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-srjbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.601841 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66859a4-682c-4aac-9f59-8077ef0987ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://22111dcf300ec536cc6a1016634e372dd581bfc8c1965f1ef72025eca7bd27a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://16ff70fe83260431bb761ab05817e149e47d0aa9773fad494524d389e0eb98ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://24bfde81209161c48b816ce80d7a29d805ef660302aa6b8a9350fc545c7f8727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://25dca08dc8d419af5e78f9e368a80743b48798dcd50aee5f0858bc1727a824e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25dca08dc8d419af5e78f9e368a80743b48798dcd50aee5f0858bc1727a824e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.612552 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.625744 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.625956 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.626022 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.626086 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.626141 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:33Z","lastTransitionTime":"2025-12-09T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.627657 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49bec440-391d-48d9-9bc6-a14f40787067\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://231e33eb8ad573ef8c7345edad3d84a71079fc1f80d6603342
2174e4d361015f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://231e33eb8ad573ef8c7345edad3d84a71079fc1f80d66033422174e4d361015f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:13:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:13:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5p5kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4hj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.637195 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a8dd347-8a1b-4551-a318-abe7c12df817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tzp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tzp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pxfmg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.652301 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9bf6317-206d-45f3-b5c6-d074a93429f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://07cb68ad1d7939b032d461e4405874dbea3c0c580d711c636b9c1bc98534ddad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://40690c3e060def2a504e5e96407e7e684a5d65be6a03e3c0c2964c5613ac3a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1501e862b689b4aabc3ad6a8aa5f8021ccdf06efb17e8f190b8a58d3a57b778\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://78cdb950caf4d3cbe020e51b49b41823961f04a520144ddc0f055b1ac4015773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://54680e71891f8c4b8d3378c6a2cebfadccf93498ccbb0cf6da1b23063f9256eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ed11940cfc0c03b0cd7b18b1d7cbe1683725e87
1a03c6c43986b37be8a6ac784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed11940cfc0c03b0cd7b18b1d7cbe1683725e871a03c6c43986b37be8a6ac784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://eacd477eb0e3af4fc175c9fa0420e700ae385a111ecbd41c975c2e3687639d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eacd477eb0e3af4fc175c9fa0420e700ae385a111ecbd41c975c2e3687639d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f9d4c76e5aead2cf533b5799e9d8b585203b915594390a713b19c361c77dab9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9d4c76e5aead2cf533b5799e9d8b585203b915594390a713b19c361c77dab9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:12:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:12:03Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.663417 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f7e101abf6036df1898e4a5e8a600730f3d9b3a9c01a6dcce26de56433eb5816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.674005 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-d24z7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80ae74e-7470-4168-bdc1-454fa2137d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://f460a1644c18f7865af7796a312778249adc6d2e94346f6d2c914bd68f28e0d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7glnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d24z7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.681784 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-94z8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4420360bd3d1cc1610c686e38b64514a9ee565569ea3859de7246bc2e0a7bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh9pw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94z8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.690045 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9716f570-4790-4075-a3c3-42114eb7728e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://0d8895d901dc5280a3ea184623742d8b415dcd31f1134ad11ee82fc8a71f1da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qdhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.728319 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.728395 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.728406 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.728426 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.728437 5173 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:33Z","lastTransitionTime":"2025-12-09T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.830645 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.830703 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.830716 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.830733 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.830743 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:33Z","lastTransitionTime":"2025-12-09T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.871221 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:13:33 crc kubenswrapper[5173]: E1209 14:13:33.871339 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.932492 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.932775 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.932783 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.932796 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:33 crc kubenswrapper[5173]: I1209 14:13:33.932805 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:33Z","lastTransitionTime":"2025-12-09T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.034265 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.034535 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.034607 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.034678 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.034757 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:34Z","lastTransitionTime":"2025-12-09T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.137113 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.137156 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.137168 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.137184 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.137195 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:34Z","lastTransitionTime":"2025-12-09T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.239718 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.239775 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.239794 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.239814 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.239828 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:34Z","lastTransitionTime":"2025-12-09T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.289563 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerStarted","Data":"ddcdfec3ac8cf6eb937f71437b340c84242ca3a95a2a479d3c6ca13b5d99356a"} Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.291022 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"d8cf0f957522fc23b205062c3110028a29b2031743873b30efa5f25a18f66d81"} Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.291154 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"e167110b587ac69db00751d48fe053d1d463a7111bd2e7ab86ca2b3681bfeb3f"} Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.292562 5173 generic.go:358] "Generic (PLEG): container finished" podID="e370197d-9d3c-48ce-8973-ceed80782226" containerID="9a2bdbfffad236c7b28c30bf85c429b736f57f698e23c3707e4d8ff8a0fe6052" exitCode=0 Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.292610 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" event={"ID":"e370197d-9d3c-48ce-8973-ceed80782226","Type":"ContainerDied","Data":"9a2bdbfffad236c7b28c30bf85c429b736f57f698e23c3707e4d8ff8a0fe6052"} Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.302180 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f7e101abf6036df1898e4a5e8a600730f3d9b3a9c01a6dcce26de56433eb5816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.312203 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-d24z7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a80ae74e-7470-4168-bdc1-454fa2137d7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://f460a1644c18f7865af7796a312778249adc6d2e94346f6d2c914bd68f28e0d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7glnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d24z7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.323414 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-94z8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3bf0ff7-fd6f-4e6b-b94f-b6b5b67c8f37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://c4420360bd3d1cc1610c686e38b64514a9ee565569ea3859de7246bc2e0a7bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vh9pw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-94z8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.336369 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-trx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9716f570-4790-4075-a3c3-42114eb7728e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://0d8895d901dc5280a3ea184623742d8b415dcd31f1134ad11ee82fc8a71f1da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qdhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-trx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.346702 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.346738 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.346747 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.346765 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.346786 5173 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:34Z","lastTransitionTime":"2025-12-09T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.354337 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86d7eab1-c137-4702-9f40-82ffc645bd99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://843a523bdd75f421c91ce69ed248e099d8a783680b394eca105778950f9d908f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5352232afbb3c547e95e2f19704e725de9906fff2ae76ca7f228ddf65d71f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5352232afbb3c547e95e2f19704e72
5de9906fff2ae76ca7f228ddf65d71f124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.365667 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8cf0f957522fc23b205062c3110028a29b2031743873b30efa5f25a18f66d81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e167110b587ac69db00751d48fe053d1d463a7111bd2e7ab86ca2b3681bfeb3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"reque
sts\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.376310 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.388560 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f29a9c75-e9f9-4865-b566-af6dce495e92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T14:13:11Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW1209 14:13:10.679235 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 14:13:10.679403 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1209 14:13:10.680393 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3111523604/tls.crt::/tmp/serving-cert-3111523604/tls.key\\\\\\\"\\\\nI1209 14:13:11.231871 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 14:13:11.234068 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 14:13:11.234094 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 14:13:11.234126 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 14:13:11.234133 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 14:13:11.238119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1209 14:13:11.238145 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 14:13:11.238151 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:13:11.238169 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 14:13:11.238176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 14:13:11.238180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 14:13:11.238183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 14:13:11.238187 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 14:13:11.240654 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T14:13:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T14:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.399163 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72458547-4bad-48ff-be39-8828056b739c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ded317057b16388136754d75b632a51e96153d2e647d0b58e89ac5f3732b778d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:00Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://3b658001d1e245caf6af8b7e926021b65cf14fe05e112bd9f5ef1b3b34dbc397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:11:59Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8f547532154a93a64f89399378cd1ddf1d539f5ccdf318f5358ab3393b1a30ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://004085d552ba1c7640d1262d02bd33a94f35afa0dcfa640e560588a800163b1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:12:02Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:11:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.409216 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.418645 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.432448 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e370197d-9d3c-48ce-8973-ceed80782226\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a2bdbfffad236c7b28c30bf85c429b736f57f698e23c3707e4d8ff8a0fe6052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5t48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mw8tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.440303 5173 status_manager.go:919] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-lbnx5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s95xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s95xm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lbnx5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.450078 5173 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07ddf926-e4f7-4486-920c-8d83fca5b4da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T14:13:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdfcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T14:13:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdfcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP
\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T14:13:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-srjbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.450984 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.451046 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.451057 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.451078 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.451090 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:34Z","lastTransitionTime":"2025-12-09T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.492521 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=16.492490482 podStartE2EDuration="16.492490482s" podCreationTimestamp="2025-12-09 14:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:34.474481091 +0000 UTC m=+97.399763358" watchObservedRunningTime="2025-12-09 14:13:34.492490482 +0000 UTC m=+97.417772729" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.495420 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.495466 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.495489 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.495519 5173 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.495627 5173 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.495723 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.495704972 +0000 UTC m=+113.420987219 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.495743 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.495783 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.495798 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.495821 5173 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.495833 5173 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.495870 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.495860537 +0000 UTC m=+113.421142784 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.495801 5173 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.495905 5173 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.495907 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.495899418 +0000 UTC m=+113.421181675 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.496006 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.495984821 +0000 UTC m=+113.421267068 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.553616 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.553660 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.553671 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.553691 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.553703 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:34Z","lastTransitionTime":"2025-12-09T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.562182 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=16.562164901 podStartE2EDuration="16.562164901s" podCreationTimestamp="2025-12-09 14:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:34.561792569 +0000 UTC m=+97.487074836" watchObservedRunningTime="2025-12-09 14:13:34.562164901 +0000 UTC m=+97.487447148" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.596840 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.597100 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.597054817 +0000 UTC m=+113.522337064 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.647300 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" podStartSLOduration=72.64726568 podStartE2EDuration="1m12.64726568s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:34.646894548 +0000 UTC m=+97.572176825" watchObservedRunningTime="2025-12-09 14:13:34.64726568 +0000 UTC m=+97.572547927" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.655479 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.655547 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.655561 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.655578 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.655608 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:34Z","lastTransitionTime":"2025-12-09T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.684386 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-d24z7" podStartSLOduration=73.684346213 podStartE2EDuration="1m13.684346213s" podCreationTimestamp="2025-12-09 14:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:34.683640192 +0000 UTC m=+97.608922469" watchObservedRunningTime="2025-12-09 14:13:34.684346213 +0000 UTC m=+97.609628460" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.694893 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-94z8j" podStartSLOduration=73.694869481 podStartE2EDuration="1m13.694869481s" podCreationTimestamp="2025-12-09 14:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:34.693939462 +0000 UTC m=+97.619221729" watchObservedRunningTime="2025-12-09 14:13:34.694869481 +0000 UTC m=+97.620151748" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.698419 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs\") pod \"network-metrics-daemon-lbnx5\" (UID: \"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\") " pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.698675 5173 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.698739 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs podName:5d73c2ad-08e4-439f-8c5f-adb67b27ef4b nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.698725751 +0000 UTC m=+113.624007998 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs") pod "network-metrics-daemon-lbnx5" (UID: "5d73c2ad-08e4-439f-8c5f-adb67b27ef4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.726325 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-trx55" podStartSLOduration=72.726303659 podStartE2EDuration="1m12.726303659s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:34.712671605 +0000 UTC m=+97.637953872" watchObservedRunningTime="2025-12-09 14:13:34.726303659 +0000 UTC m=+97.651585906" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.741440 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=16.74141776 podStartE2EDuration="16.74141776s" podCreationTimestamp="2025-12-09 14:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:34.725898517 +0000 UTC m=+97.651180774" watchObservedRunningTime="2025-12-09 14:13:34.74141776 +0000 UTC m=+97.666700007" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.758489 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.758548 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.758560 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.758582 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.758599 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:34Z","lastTransitionTime":"2025-12-09T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.851016 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=16.850972259 podStartE2EDuration="16.850972259s" podCreationTimestamp="2025-12-09 14:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:34.849886615 +0000 UTC m=+97.775168882" watchObservedRunningTime="2025-12-09 14:13:34.850972259 +0000 UTC m=+97.776254506" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.861824 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.861879 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.861889 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.861906 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.861917 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:34Z","lastTransitionTime":"2025-12-09T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.870089 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.870111 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.870151 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.870227 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lbnx5" podUID="5d73c2ad-08e4-439f-8c5f-adb67b27ef4b" Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.870386 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.870792 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.870908 5173 scope.go:117] "RemoveContainer" containerID="c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d" Dec 09 14:13:34 crc kubenswrapper[5173]: E1209 14:13:34.871091 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.964952 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.965011 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.965024 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.965039 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:34 crc kubenswrapper[5173]: I1209 14:13:34.965071 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:34Z","lastTransitionTime":"2025-12-09T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.066614 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.066655 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.066663 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.066677 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.066686 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:35Z","lastTransitionTime":"2025-12-09T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.169904 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.169942 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.169953 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.169966 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.169977 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:35Z","lastTransitionTime":"2025-12-09T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.272245 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.272308 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.272320 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.272338 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.272371 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:35Z","lastTransitionTime":"2025-12-09T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.296806 5173 generic.go:358] "Generic (PLEG): container finished" podID="e370197d-9d3c-48ce-8973-ceed80782226" containerID="c87fa8d70dec539f19ae056ab41d149c7caa414ca534a512ad71e0cede09a383" exitCode=0 Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.296898 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" event={"ID":"e370197d-9d3c-48ce-8973-ceed80782226","Type":"ContainerDied","Data":"c87fa8d70dec539f19ae056ab41d149c7caa414ca534a512ad71e0cede09a383"} Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.298218 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" event={"ID":"8a8dd347-8a1b-4551-a318-abe7c12df817","Type":"ContainerStarted","Data":"717ffac5b6ce2f7525bcfe22fab56f1630c0f641bd13409114fc2361f1304612"} Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.298250 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" event={"ID":"8a8dd347-8a1b-4551-a318-abe7c12df817","Type":"ContainerStarted","Data":"7e585a8663ff5e2821ef163759a8486a08d59824ba49fa41e0d15200765ef763"} Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.346440 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podStartSLOduration=74.346418109 podStartE2EDuration="1m14.346418109s" podCreationTimestamp="2025-12-09 14:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:35.34515182 +0000 UTC m=+98.270434077" watchObservedRunningTime="2025-12-09 14:13:35.346418109 +0000 UTC m=+98.271700366" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.374344 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.374403 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.374417 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.374434 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.374447 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:35Z","lastTransitionTime":"2025-12-09T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.476792 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.477122 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.477133 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.477147 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.477159 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:35Z","lastTransitionTime":"2025-12-09T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.579876 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.579917 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.579942 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.579975 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.579991 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:35Z","lastTransitionTime":"2025-12-09T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.683033 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.683074 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.683088 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.683104 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.683115 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:35Z","lastTransitionTime":"2025-12-09T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.785085 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.785126 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.785145 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.785165 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.785175 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:35Z","lastTransitionTime":"2025-12-09T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.874672 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:13:35 crc kubenswrapper[5173]: E1209 14:13:35.874782 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.887556 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.887601 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.887613 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.887627 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.887639 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:35Z","lastTransitionTime":"2025-12-09T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.989747 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.989831 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.989849 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.989872 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:35 crc kubenswrapper[5173]: I1209 14:13:35.989887 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:35Z","lastTransitionTime":"2025-12-09T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.092655 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.092698 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.092709 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.092722 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.092732 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:36Z","lastTransitionTime":"2025-12-09T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.194769 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.194812 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.194824 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.194842 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.194854 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:36Z","lastTransitionTime":"2025-12-09T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.297370 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.297422 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.297433 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.297448 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.297461 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:36Z","lastTransitionTime":"2025-12-09T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.304540 5173 generic.go:358] "Generic (PLEG): container finished" podID="e370197d-9d3c-48ce-8973-ceed80782226" containerID="75131fd730e7d4f225e7df86f5e515745af4638004e819145f83bcfb120084e7" exitCode=0 Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.304610 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" event={"ID":"e370197d-9d3c-48ce-8973-ceed80782226","Type":"ContainerDied","Data":"75131fd730e7d4f225e7df86f5e515745af4638004e819145f83bcfb120084e7"} Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.315796 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerStarted","Data":"958b3c42394f5bda4762c8a20b5ad6dc4de5947214d67c8de6fc2a7258ad7bb7"} Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.399103 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.399149 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.399162 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.399179 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.399190 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:36Z","lastTransitionTime":"2025-12-09T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.501454 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.501506 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.501519 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.501535 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.501545 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:36Z","lastTransitionTime":"2025-12-09T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.605840 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.609537 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.609581 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.609605 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.609621 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:36Z","lastTransitionTime":"2025-12-09T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.711295 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.711333 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.711393 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.711415 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.711425 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:36Z","lastTransitionTime":"2025-12-09T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.813848 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.813897 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.813908 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.813926 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.813937 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:36Z","lastTransitionTime":"2025-12-09T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.869595 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.869614 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.869615 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:36 crc kubenswrapper[5173]: E1209 14:13:36.869729 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lbnx5" podUID="5d73c2ad-08e4-439f-8c5f-adb67b27ef4b" Dec 09 14:13:36 crc kubenswrapper[5173]: E1209 14:13:36.869816 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:13:36 crc kubenswrapper[5173]: E1209 14:13:36.869916 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.915570 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.915629 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.915646 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.915667 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:36 crc kubenswrapper[5173]: I1209 14:13:36.915684 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:36Z","lastTransitionTime":"2025-12-09T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.017520 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.017566 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.017577 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.017592 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.017604 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:37Z","lastTransitionTime":"2025-12-09T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.120886 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.120946 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.120958 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.120976 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.120989 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:37Z","lastTransitionTime":"2025-12-09T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.223622 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.223677 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.223690 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.223707 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.223719 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:37Z","lastTransitionTime":"2025-12-09T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.321638 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"397f07215d5f30a74566d2e97ae8b45a28b4e283c2e0688e5c412b0cf2a16d8c"} Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.325190 5173 generic.go:358] "Generic (PLEG): container finished" podID="e370197d-9d3c-48ce-8973-ceed80782226" containerID="975c050c919a2a58f3d8a70d927c8b16098e086f73c45df6879ad6f1de228866" exitCode=0 Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.325298 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" event={"ID":"e370197d-9d3c-48ce-8973-ceed80782226","Type":"ContainerDied","Data":"975c050c919a2a58f3d8a70d927c8b16098e086f73c45df6879ad6f1de228866"} Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.325397 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.325445 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.325462 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.325484 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.325503 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:37Z","lastTransitionTime":"2025-12-09T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.428840 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.429385 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.429395 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.429410 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.429421 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:37Z","lastTransitionTime":"2025-12-09T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.532176 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.533579 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.537201 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.537302 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.537404 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:37Z","lastTransitionTime":"2025-12-09T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.638652 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.638882 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.638985 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.639057 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.639113 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:37Z","lastTransitionTime":"2025-12-09T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.741258 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.741300 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.741313 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.741331 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.741346 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:37Z","lastTransitionTime":"2025-12-09T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.843776 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.843825 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.843839 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.843858 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.843871 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:37Z","lastTransitionTime":"2025-12-09T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.871419 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:13:37 crc kubenswrapper[5173]: E1209 14:13:37.871795 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.946158 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.946215 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.946228 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.946246 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:37 crc kubenswrapper[5173]: I1209 14:13:37.946256 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:37Z","lastTransitionTime":"2025-12-09T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.048313 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.048374 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.048386 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.048402 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.048411 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:38Z","lastTransitionTime":"2025-12-09T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.149925 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.149969 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.149980 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.149994 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.150005 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:38Z","lastTransitionTime":"2025-12-09T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.251858 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.251921 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.251939 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.251960 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.251974 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:38Z","lastTransitionTime":"2025-12-09T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.337553 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" event={"ID":"e370197d-9d3c-48ce-8973-ceed80782226","Type":"ContainerStarted","Data":"04a1d278c702e285872cb7b51623e34bb8098e4774b540affe358c8d80f63de7"} Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.343614 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerStarted","Data":"4a2bb8cc7c7e031ab4de5e733d3571412a3459cbc73b22a27811071af61a5d3b"} Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.343940 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.353417 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.353465 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.353476 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.353493 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.353504 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:38Z","lastTransitionTime":"2025-12-09T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.371924 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.421242 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" podStartSLOduration=77.421221349 podStartE2EDuration="1m17.421221349s" podCreationTimestamp="2025-12-09 14:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:38.393223098 +0000 UTC m=+101.318505365" watchObservedRunningTime="2025-12-09 14:13:38.421221349 +0000 UTC m=+101.346503596" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.455798 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.455846 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.455856 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.455870 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.455880 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:38Z","lastTransitionTime":"2025-12-09T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.558873 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.558920 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.558931 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.558948 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.558960 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:38Z","lastTransitionTime":"2025-12-09T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.660746 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.660801 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.660813 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.660830 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.660844 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:38Z","lastTransitionTime":"2025-12-09T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.762828 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.762895 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.762905 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.762921 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.762935 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:38Z","lastTransitionTime":"2025-12-09T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.804078 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.804117 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.804128 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.804142 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.804153 5173 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T14:13:38Z","lastTransitionTime":"2025-12-09T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.850219 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg"] Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.962004 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.962088 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:38 crc kubenswrapper[5173]: E1209 14:13:38.962559 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lbnx5" podUID="5d73c2ad-08e4-439f-8c5f-adb67b27ef4b" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.962815 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:13:38 crc kubenswrapper[5173]: E1209 14:13:38.962951 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.963023 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:38 crc kubenswrapper[5173]: E1209 14:13:38.963084 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.964459 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.966006 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.966053 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 09 14:13:38 crc kubenswrapper[5173]: I1209 14:13:38.966388 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.046935 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/77411c61-312f-43b4-a016-1b81cbc2baab-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-7n9pg\" (UID: \"77411c61-312f-43b4-a016-1b81cbc2baab\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.047045 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77411c61-312f-43b4-a016-1b81cbc2baab-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-7n9pg\" (UID: \"77411c61-312f-43b4-a016-1b81cbc2baab\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.047127 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/77411c61-312f-43b4-a016-1b81cbc2baab-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-7n9pg\" (UID: \"77411c61-312f-43b4-a016-1b81cbc2baab\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.047256 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/77411c61-312f-43b4-a016-1b81cbc2baab-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-7n9pg\" (UID: \"77411c61-312f-43b4-a016-1b81cbc2baab\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.047310 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77411c61-312f-43b4-a016-1b81cbc2baab-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-7n9pg\" (UID: \"77411c61-312f-43b4-a016-1b81cbc2baab\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.148192 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/77411c61-312f-43b4-a016-1b81cbc2baab-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-7n9pg\" (UID: \"77411c61-312f-43b4-a016-1b81cbc2baab\") " 
pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.148257 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/77411c61-312f-43b4-a016-1b81cbc2baab-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-7n9pg\" (UID: \"77411c61-312f-43b4-a016-1b81cbc2baab\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.148287 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77411c61-312f-43b4-a016-1b81cbc2baab-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-7n9pg\" (UID: \"77411c61-312f-43b4-a016-1b81cbc2baab\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.148444 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/77411c61-312f-43b4-a016-1b81cbc2baab-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-7n9pg\" (UID: \"77411c61-312f-43b4-a016-1b81cbc2baab\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.148650 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/77411c61-312f-43b4-a016-1b81cbc2baab-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-7n9pg\" (UID: \"77411c61-312f-43b4-a016-1b81cbc2baab\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.148737 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/77411c61-312f-43b4-a016-1b81cbc2baab-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-7n9pg\" (UID: \"77411c61-312f-43b4-a016-1b81cbc2baab\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.148819 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77411c61-312f-43b4-a016-1b81cbc2baab-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-7n9pg\" (UID: \"77411c61-312f-43b4-a016-1b81cbc2baab\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.149623 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/77411c61-312f-43b4-a016-1b81cbc2baab-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-7n9pg\" (UID: \"77411c61-312f-43b4-a016-1b81cbc2baab\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.160930 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77411c61-312f-43b4-a016-1b81cbc2baab-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-7n9pg\" (UID: \"77411c61-312f-43b4-a016-1b81cbc2baab\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.164293 5173 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77411c61-312f-43b4-a016-1b81cbc2baab-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-7n9pg\" (UID: \"77411c61-312f-43b4-a016-1b81cbc2baab\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.278382 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.347182 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" event={"ID":"77411c61-312f-43b4-a016-1b81cbc2baab","Type":"ContainerStarted","Data":"d6e8f5a88e3e3d7a7f6fca9868749f28f4eb74be43b04d31f9536fb51c944d2b"} Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.349954 5173 generic.go:358] "Generic (PLEG): container finished" podID="e370197d-9d3c-48ce-8973-ceed80782226" containerID="04a1d278c702e285872cb7b51623e34bb8098e4774b540affe358c8d80f63de7" exitCode=0 Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.350049 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" event={"ID":"e370197d-9d3c-48ce-8973-ceed80782226","Type":"ContainerDied","Data":"04a1d278c702e285872cb7b51623e34bb8098e4774b540affe358c8d80f63de7"} Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.350847 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.350912 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.376861 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.721693 5173 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.735257 5173 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.870367 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:13:39 crc kubenswrapper[5173]: E1209 14:13:39.870504 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.974901 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lbnx5"] Dec 09 14:13:39 crc kubenswrapper[5173]: I1209 14:13:39.975063 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:39 crc kubenswrapper[5173]: E1209 14:13:39.976167 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lbnx5" podUID="5d73c2ad-08e4-439f-8c5f-adb67b27ef4b" Dec 09 14:13:40 crc kubenswrapper[5173]: I1209 14:13:40.355532 5173 generic.go:358] "Generic (PLEG): container finished" podID="e370197d-9d3c-48ce-8973-ceed80782226" containerID="a6713040a434e218fb2e3dc6fbc0171d542e783339d3b13733532402bf48968c" exitCode=0 Dec 09 14:13:40 crc kubenswrapper[5173]: I1209 14:13:40.355942 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" event={"ID":"e370197d-9d3c-48ce-8973-ceed80782226","Type":"ContainerDied","Data":"a6713040a434e218fb2e3dc6fbc0171d542e783339d3b13733532402bf48968c"} Dec 09 14:13:40 crc kubenswrapper[5173]: I1209 14:13:40.361019 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" event={"ID":"77411c61-312f-43b4-a016-1b81cbc2baab","Type":"ContainerStarted","Data":"e3e095e60e052351632c94d109c871e57d9baf3a73e6932cc3553d61fd9a30dc"} Dec 09 14:13:40 crc kubenswrapper[5173]: I1209 14:13:40.390864 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-7n9pg" podStartSLOduration=79.390845681 podStartE2EDuration="1m19.390845681s" podCreationTimestamp="2025-12-09 14:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:40.389244892 +0000 UTC m=+103.314527169" watchObservedRunningTime="2025-12-09 14:13:40.390845681 +0000 UTC m=+103.316127928" Dec 09 14:13:40 crc kubenswrapper[5173]: I1209 14:13:40.870401 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:13:40 crc kubenswrapper[5173]: I1209 14:13:40.870448 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:40 crc kubenswrapper[5173]: E1209 14:13:40.870770 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:13:40 crc kubenswrapper[5173]: E1209 14:13:40.870924 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:13:41 crc kubenswrapper[5173]: I1209 14:13:41.370569 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" event={"ID":"e370197d-9d3c-48ce-8973-ceed80782226","Type":"ContainerStarted","Data":"a8bb626b58f3b063de41a5d4fb944be95afcd2463ed606442df07f1dd75d0b27"} Dec 09 14:13:41 crc kubenswrapper[5173]: I1209 14:13:41.392009 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mw8tp" podStartSLOduration=80.39198597 podStartE2EDuration="1m20.39198597s" podCreationTimestamp="2025-12-09 14:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:41.391420943 +0000 UTC m=+104.316703210" watchObservedRunningTime="2025-12-09 14:13:41.39198597 +0000 UTC m=+104.317268217" Dec 09 14:13:41 crc kubenswrapper[5173]: I1209 14:13:41.874038 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:13:41 crc kubenswrapper[5173]: E1209 14:13:41.874179 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:13:41 crc kubenswrapper[5173]: I1209 14:13:41.874236 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:41 crc kubenswrapper[5173]: E1209 14:13:41.874499 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lbnx5" podUID="5d73c2ad-08e4-439f-8c5f-adb67b27ef4b" Dec 09 14:13:42 crc kubenswrapper[5173]: I1209 14:13:42.870619 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:42 crc kubenswrapper[5173]: I1209 14:13:42.870729 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:13:42 crc kubenswrapper[5173]: E1209 14:13:42.870844 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 09 14:13:42 crc kubenswrapper[5173]: E1209 14:13:42.871013 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 09 14:13:43 crc kubenswrapper[5173]: I1209 14:13:43.869607 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:13:43 crc kubenswrapper[5173]: I1209 14:13:43.869615 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:43 crc kubenswrapper[5173]: E1209 14:13:43.869771 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 09 14:13:43 crc kubenswrapper[5173]: E1209 14:13:43.869873 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lbnx5" podUID="5d73c2ad-08e4-439f-8c5f-adb67b27ef4b" Dec 09 14:13:44 crc kubenswrapper[5173]: I1209 14:13:44.823658 5173 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Dec 09 14:13:44 crc kubenswrapper[5173]: I1209 14:13:44.823763 5173 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Dec 09 14:13:44 crc kubenswrapper[5173]: I1209 14:13:44.858096 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-pkw8g"] Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.826597 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b2v58"] Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.826809 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.826984 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.827318 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lbnx5" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.827594 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.829970 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.830105 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.830546 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.831003 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.831144 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.831727 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.920779 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.920970 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b2v58" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.924332 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.924347 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.925107 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.928226 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.928462 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.928670 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.928836 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.930115 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-zhlr7"] Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.934048 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/76317343-bf5b-441f-ae79-e09f3d1188cd-serving-cert\") pod \"openshift-config-operator-5777786469-pkw8g\" (UID: \"76317343-bf5b-441f-ae79-e09f3d1188cd\") " pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.934106 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvq2g\" (UniqueName: \"kubernetes.io/projected/76317343-bf5b-441f-ae79-e09f3d1188cd-kube-api-access-rvq2g\") pod \"openshift-config-operator-5777786469-pkw8g\" (UID: \"76317343-bf5b-441f-ae79-e09f3d1188cd\") " pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.934146 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/76317343-bf5b-441f-ae79-e09f3d1188cd-available-featuregates\") pod \"openshift-config-operator-5777786469-pkw8g\" (UID: \"76317343-bf5b-441f-ae79-e09f3d1188cd\") " pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.933484 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.960153 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-q5kgl"] Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.960334 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-zhlr7" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.962677 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.962935 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 09 14:13:46 crc kubenswrapper[5173]: I1209 14:13:46.963947 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.021383 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-tpkl8"] Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.021526 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-q5kgl" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.023609 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.023635 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.024425 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.024644 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.024839 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.024910 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.032476 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.034844 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76317343-bf5b-441f-ae79-e09f3d1188cd-serving-cert\") pod \"openshift-config-operator-5777786469-pkw8g\" (UID: \"76317343-bf5b-441f-ae79-e09f3d1188cd\") " pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.034896 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rvq2g\" (UniqueName: \"kubernetes.io/projected/76317343-bf5b-441f-ae79-e09f3d1188cd-kube-api-access-rvq2g\") pod \"openshift-config-operator-5777786469-pkw8g\" (UID: \"76317343-bf5b-441f-ae79-e09f3d1188cd\") " pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.034946 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/76317343-bf5b-441f-ae79-e09f3d1188cd-available-featuregates\") pod \"openshift-config-operator-5777786469-pkw8g\" (UID: \"76317343-bf5b-441f-ae79-e09f3d1188cd\") " pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.035014 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v4hv\" (UniqueName: \"kubernetes.io/projected/6794662c-7933-4e08-870f-c44892aef039-kube-api-access-2v4hv\") pod \"downloads-747b44746d-zhlr7\" (UID: \"6794662c-7933-4e08-870f-c44892aef039\") " pod="openshift-console/downloads-747b44746d-zhlr7" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.035109 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5263b977-f1d9-4b01-9cd3-25a488d46ac7-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-b2v58\" (UID: \"5263b977-f1d9-4b01-9cd3-25a488d46ac7\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b2v58" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.035142 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pp6q\" (UniqueName: \"kubernetes.io/projected/5263b977-f1d9-4b01-9cd3-25a488d46ac7-kube-api-access-9pp6q\") pod \"cluster-samples-operator-6b564684c8-b2v58\" (UID: \"5263b977-f1d9-4b01-9cd3-25a488d46ac7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b2v58" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.036087 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/76317343-bf5b-441f-ae79-e09f3d1188cd-available-featuregates\") pod \"openshift-config-operator-5777786469-pkw8g\" (UID: \"76317343-bf5b-441f-ae79-e09f3d1188cd\") " pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.041799 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76317343-bf5b-441f-ae79-e09f3d1188cd-serving-cert\") pod \"openshift-config-operator-5777786469-pkw8g\" (UID: \"76317343-bf5b-441f-ae79-e09f3d1188cd\") " pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.060177 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvq2g\" (UniqueName: \"kubernetes.io/projected/76317343-bf5b-441f-ae79-e09f3d1188cd-kube-api-access-rvq2g\") pod \"openshift-config-operator-5777786469-pkw8g\" (UID: \"76317343-bf5b-441f-ae79-e09f3d1188cd\") " pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.136761 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9pp6q\" (UniqueName: \"kubernetes.io/projected/5263b977-f1d9-4b01-9cd3-25a488d46ac7-kube-api-access-9pp6q\") pod \"cluster-samples-operator-6b564684c8-b2v58\" (UID: \"5263b977-f1d9-4b01-9cd3-25a488d46ac7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b2v58" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.138726 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-oauth-serving-cert\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.138892 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-trusted-ca-bundle\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.139046 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-console-config\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl" Dec 09 
14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.139205 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-console-serving-cert\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.139332 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-console-oauth-config\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.139499 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5263b977-f1d9-4b01-9cd3-25a488d46ac7-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-b2v58\" (UID: \"5263b977-f1d9-4b01-9cd3-25a488d46ac7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b2v58"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.140157 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv9br\" (UniqueName: \"kubernetes.io/projected/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-kube-api-access-tv9br\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.140330 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-service-ca\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.141385 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2v4hv\" (UniqueName: \"kubernetes.io/projected/6794662c-7933-4e08-870f-c44892aef039-kube-api-access-2v4hv\") pod \"downloads-747b44746d-zhlr7\" (UID: \"6794662c-7933-4e08-870f-c44892aef039\") " pod="openshift-console/downloads-747b44746d-zhlr7"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.145709 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5263b977-f1d9-4b01-9cd3-25a488d46ac7-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-b2v58\" (UID: \"5263b977-f1d9-4b01-9cd3-25a488d46ac7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b2v58"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.155679 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pp6q\" (UniqueName: \"kubernetes.io/projected/5263b977-f1d9-4b01-9cd3-25a488d46ac7-kube-api-access-9pp6q\") pod \"cluster-samples-operator-6b564684c8-b2v58\" (UID: \"5263b977-f1d9-4b01-9cd3-25a488d46ac7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b2v58"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.171405 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v4hv\" (UniqueName: \"kubernetes.io/projected/6794662c-7933-4e08-870f-c44892aef039-kube-api-access-2v4hv\") pod \"downloads-747b44746d-zhlr7\" (UID: \"6794662c-7933-4e08-870f-c44892aef039\") " pod="openshift-console/downloads-747b44746d-zhlr7"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.243289 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-oauth-serving-cert\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.243381 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-trusted-ca-bundle\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.243404 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-console-config\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.243432 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-console-serving-cert\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.243464 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-console-oauth-config\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.243505 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tv9br\" (UniqueName: \"kubernetes.io/projected/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-kube-api-access-tv9br\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.243535 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-service-ca\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.244812 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-service-ca\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.244925 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-console-config\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.246186 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-oauth-serving-cert\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.246924 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-trusted-ca-bundle\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.248556 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-console-oauth-config\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.248743 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-console-serving-cert\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.250448 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.264756 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b2v58"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.271115 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv9br\" (UniqueName: \"kubernetes.io/projected/a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b-kube-api-access-tv9br\") pod \"console-64d44f6ddf-q5kgl\" (UID: \"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b\") " pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.275497 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-zhlr7"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.341880 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.377953 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-57k5h"]
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.446218 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3f277bd6-ea48-4729-960f-5a2b97bbfecc-registry-tls\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.446273 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f277bd6-ea48-4729-960f-5a2b97bbfecc-trusted-ca\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.446309 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.446382 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w495\" (UniqueName: \"kubernetes.io/projected/3f277bd6-ea48-4729-960f-5a2b97bbfecc-kube-api-access-4w495\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.446417 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3f277bd6-ea48-4729-960f-5a2b97bbfecc-registry-certificates\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.446440 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3f277bd6-ea48-4729-960f-5a2b97bbfecc-ca-trust-extracted\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.446489 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f277bd6-ea48-4729-960f-5a2b97bbfecc-bound-sa-token\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.446511 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3f277bd6-ea48-4729-960f-5a2b97bbfecc-installation-pull-secrets\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: E1209 14:13:47.447283 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:47.947265764 +0000 UTC m=+110.872548011 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.544732 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-54cg5"]
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.544867 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.544954 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.546862 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.546998 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3f277bd6-ea48-4729-960f-5a2b97bbfecc-registry-certificates\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.547025 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3f277bd6-ea48-4729-960f-5a2b97bbfecc-ca-trust-extracted\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.547065 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f277bd6-ea48-4729-960f-5a2b97bbfecc-bound-sa-token\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.547083 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3f277bd6-ea48-4729-960f-5a2b97bbfecc-installation-pull-secrets\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: E1209 14:13:47.547160 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:48.047133382 +0000 UTC m=+110.972415629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.547447 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3f277bd6-ea48-4729-960f-5a2b97bbfecc-registry-tls\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.547506 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f277bd6-ea48-4729-960f-5a2b97bbfecc-trusted-ca\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.547540 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.547627 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4w495\" (UniqueName: \"kubernetes.io/projected/3f277bd6-ea48-4729-960f-5a2b97bbfecc-kube-api-access-4w495\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: E1209 14:13:47.548266 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:48.048253446 +0000 UTC m=+110.973535693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.548324 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3f277bd6-ea48-4729-960f-5a2b97bbfecc-registry-certificates\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.548970 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.548978 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3f277bd6-ea48-4729-960f-5a2b97bbfecc-ca-trust-extracted\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.548989 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.549273 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.549427 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.549468 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.549543 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.550786 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.551066 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.551314 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.551547 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.551693 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.552939 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.553158 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.553504 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"]
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.553994 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.562672 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3f277bd6-ea48-4729-960f-5a2b97bbfecc-installation-pull-secrets\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.563973 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3f277bd6-ea48-4729-960f-5a2b97bbfecc-registry-tls\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.566691 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.570266 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.571615 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.571657 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.572155 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.572162 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.572477 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.574745 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.575790 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.576025 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-lh94q"]
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.576133 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.577091 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f277bd6-ea48-4729-960f-5a2b97bbfecc-bound-sa-token\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.579171 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.579589 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.579633 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.579717 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.579758 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.579871 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.580971 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w495\" (UniqueName: \"kubernetes.io/projected/3f277bd6-ea48-4729-960f-5a2b97bbfecc-kube-api-access-4w495\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.583133 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f277bd6-ea48-4729-960f-5a2b97bbfecc-trusted-ca\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.584407 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Dec 09 14:13:47 crc kubenswrapper[5173]: W1209 14:13:47.587233 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8f67fe4_59ba_4391_aa5d_ba4a8e1fe68b.slice/crio-0f45c9af2c4df8e9796bef7b4b0b5eb18f1128e6111cf4b3e96dec77b70d6e0e WatchSource:0}: Error finding container 0f45c9af2c4df8e9796bef7b4b0b5eb18f1128e6111cf4b3e96dec77b70d6e0e: Status 404 returned error can't find the container with id 0f45c9af2c4df8e9796bef7b4b0b5eb18f1128e6111cf4b3e96dec77b70d6e0e
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.648853 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:47 crc kubenswrapper[5173]: E1209 14:13:47.649014 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:48.148991232 +0000 UTC m=+111.074273479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.649074 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4751d5a1-9958-4f4f-aa73-a94b587a09b7-serving-cert\") pod \"controller-manager-65b6cccf98-54cg5\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.649113 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0917873e-8059-49a3-aec4-f2b5152fc356-config\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.649137 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0917873e-8059-49a3-aec4-f2b5152fc356-audit\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.649155 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0917873e-8059-49a3-aec4-f2b5152fc356-audit-dir\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.649234 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0917873e-8059-49a3-aec4-f2b5152fc356-etcd-client\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.649305 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4751d5a1-9958-4f4f-aa73-a94b587a09b7-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-54cg5\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.649615 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjfkn\" (UniqueName: \"kubernetes.io/projected/4751d5a1-9958-4f4f-aa73-a94b587a09b7-kube-api-access-rjfkn\") pod \"controller-manager-65b6cccf98-54cg5\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.649850 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0917873e-8059-49a3-aec4-f2b5152fc356-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.649894 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a58f4e37-afc4-442b-b93e-87303f0dbdb6-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-ftc7p\" (UID: \"a58f4e37-afc4-442b-b93e-87303f0dbdb6\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.649910 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0917873e-8059-49a3-aec4-f2b5152fc356-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.649929 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a58f4e37-afc4-442b-b93e-87303f0dbdb6-serving-cert\") pod \"authentication-operator-7f5c659b84-ftc7p\" (UID: \"a58f4e37-afc4-442b-b93e-87303f0dbdb6\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.649993 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxrwc\" (UniqueName: \"kubernetes.io/projected/a58f4e37-afc4-442b-b93e-87303f0dbdb6-kube-api-access-mxrwc\") pod \"authentication-operator-7f5c659b84-ftc7p\" (UID: \"a58f4e37-afc4-442b-b93e-87303f0dbdb6\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.650037 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a58f4e37-afc4-442b-b93e-87303f0dbdb6-config\") pod \"authentication-operator-7f5c659b84-ftc7p\" (UID: \"a58f4e37-afc4-442b-b93e-87303f0dbdb6\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.650308 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.650385 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0917873e-8059-49a3-aec4-f2b5152fc356-encryption-config\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.650409 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a58f4e37-afc4-442b-b93e-87303f0dbdb6-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-ftc7p\" (UID: \"a58f4e37-afc4-442b-b93e-87303f0dbdb6\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"
Dec 09 14:13:47 crc kubenswrapper[5173]: E1209 14:13:47.650642 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:48.150633843 +0000 UTC m=+111.075916090 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.651015 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4751d5a1-9958-4f4f-aa73-a94b587a09b7-client-ca\") pod \"controller-manager-65b6cccf98-54cg5\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.651083 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0917873e-8059-49a3-aec4-f2b5152fc356-serving-cert\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.651120 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8kds\" (UniqueName: \"kubernetes.io/projected/0917873e-8059-49a3-aec4-f2b5152fc356-kube-api-access-q8kds\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.651180 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0917873e-8059-49a3-aec4-f2b5152fc356-node-pullsecrets\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.651222 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0917873e-8059-49a3-aec4-f2b5152fc356-image-import-ca\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.651300 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4751d5a1-9958-4f4f-aa73-a94b587a09b7-config\") pod \"controller-manager-65b6cccf98-54cg5\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.656654 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4751d5a1-9958-4f4f-aa73-a94b587a09b7-tmp\") pod \"controller-manager-65b6cccf98-54cg5\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.684411 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9"]
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.684550 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.688759 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.688960 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.689124 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.689462 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.689603 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.689739 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.690019 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.690218 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.757417 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:47 crc kubenswrapper[5173]: E1209 14:13:47.757617 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:48.257588632 +0000 UTC m=+111.182870879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.757806 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4d271279-fdf8-48d7-b1d8-1b05fee604d4-tmp-dir\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.757844 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4751d5a1-9958-4f4f-aa73-a94b587a09b7-serving-cert\") pod \"controller-manager-65b6cccf98-54cg5\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.757861 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4d271279-fdf8-48d7-b1d8-1b05fee604d4-etcd-ca\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.757888 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0917873e-8059-49a3-aec4-f2b5152fc356-config\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.757910 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0917873e-8059-49a3-aec4-f2b5152fc356-audit\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.757926 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0917873e-8059-49a3-aec4-f2b5152fc356-audit-dir\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.757942 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0917873e-8059-49a3-aec4-f2b5152fc356-etcd-client\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.757965 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4751d5a1-9958-4f4f-aa73-a94b587a09b7-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-54cg5\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.757992 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rjfkn\" (UniqueName: \"kubernetes.io/projected/4751d5a1-9958-4f4f-aa73-a94b587a09b7-kube-api-access-rjfkn\") pod \"controller-manager-65b6cccf98-54cg5\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758011 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d271279-fdf8-48d7-b1d8-1b05fee604d4-serving-cert\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758029 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h52b\" (UniqueName: \"kubernetes.io/projected/4d271279-fdf8-48d7-b1d8-1b05fee604d4-kube-api-access-8h52b\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758044 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0917873e-8059-49a3-aec4-f2b5152fc356-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758063 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a58f4e37-afc4-442b-b93e-87303f0dbdb6-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-ftc7p\" (UID: \"a58f4e37-afc4-442b-b93e-87303f0dbdb6\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758080 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0917873e-8059-49a3-aec4-f2b5152fc356-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758097 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a58f4e37-afc4-442b-b93e-87303f0dbdb6-serving-cert\") pod \"authentication-operator-7f5c659b84-ftc7p\" (UID: \"a58f4e37-afc4-442b-b93e-87303f0dbdb6\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758117 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mxrwc\" (UniqueName: \"kubernetes.io/projected/a58f4e37-afc4-442b-b93e-87303f0dbdb6-kube-api-access-mxrwc\") pod \"authentication-operator-7f5c659b84-ftc7p\" (UID: \"a58f4e37-afc4-442b-b93e-87303f0dbdb6\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758140 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a58f4e37-afc4-442b-b93e-87303f0dbdb6-config\") pod \"authentication-operator-7f5c659b84-ftc7p\" (UID: \"a58f4e37-afc4-442b-b93e-87303f0dbdb6\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758171 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758197 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0917873e-8059-49a3-aec4-f2b5152fc356-encryption-config\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758214 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a58f4e37-afc4-442b-b93e-87303f0dbdb6-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-ftc7p\" (UID: \"a58f4e37-afc4-442b-b93e-87303f0dbdb6\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758230 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4751d5a1-9958-4f4f-aa73-a94b587a09b7-client-ca\") pod \"controller-manager-65b6cccf98-54cg5\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758244 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4d271279-fdf8-48d7-b1d8-1b05fee604d4-etcd-client\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758274 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0917873e-8059-49a3-aec4-f2b5152fc356-serving-cert\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758292 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q8kds\" (UniqueName: \"kubernetes.io/projected/0917873e-8059-49a3-aec4-f2b5152fc356-kube-api-access-q8kds\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758311 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0917873e-8059-49a3-aec4-f2b5152fc356-node-pullsecrets\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758328 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0917873e-8059-49a3-aec4-f2b5152fc356-image-import-ca\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758365 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4751d5a1-9958-4f4f-aa73-a94b587a09b7-config\") pod \"controller-manager-65b6cccf98-54cg5\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758380 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d271279-fdf8-48d7-b1d8-1b05fee604d4-config\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758397 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4751d5a1-9958-4f4f-aa73-a94b587a09b7-tmp\") pod \"controller-manager-65b6cccf98-54cg5\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.758440 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d271279-fdf8-48d7-b1d8-1b05fee604d4-etcd-service-ca\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.759176 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0917873e-8059-49a3-aec4-f2b5152fc356-config\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: E1209 14:13:47.759484 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:48.25947189 +0000 UTC m=+111.184754137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.759654 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0917873e-8059-49a3-aec4-f2b5152fc356-audit\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.759698 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0917873e-8059-49a3-aec4-f2b5152fc356-audit-dir\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.759928 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0917873e-8059-49a3-aec4-f2b5152fc356-node-pullsecrets\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.761159 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0917873e-8059-49a3-aec4-f2b5152fc356-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.761396 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0917873e-8059-49a3-aec4-f2b5152fc356-image-import-ca\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.762049 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4751d5a1-9958-4f4f-aa73-a94b587a09b7-config\") pod \"controller-manager-65b6cccf98-54cg5\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.762114 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4751d5a1-9958-4f4f-aa73-a94b587a09b7-client-ca\") pod \"controller-manager-65b6cccf98-54cg5\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.762337 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4751d5a1-9958-4f4f-aa73-a94b587a09b7-tmp\") pod \"controller-manager-65b6cccf98-54cg5\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.762580 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a58f4e37-afc4-442b-b93e-87303f0dbdb6-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-ftc7p\" (UID: \"a58f4e37-afc4-442b-b93e-87303f0dbdb6\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.764939 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a58f4e37-afc4-442b-b93e-87303f0dbdb6-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-ftc7p\" (UID: \"a58f4e37-afc4-442b-b93e-87303f0dbdb6\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.765709 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0917873e-8059-49a3-aec4-f2b5152fc356-serving-cert\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.765721 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a58f4e37-afc4-442b-b93e-87303f0dbdb6-config\") pod \"authentication-operator-7f5c659b84-ftc7p\" (UID: \"a58f4e37-afc4-442b-b93e-87303f0dbdb6\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.766170 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4751d5a1-9958-4f4f-aa73-a94b587a09b7-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-54cg5\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.766335 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0917873e-8059-49a3-aec4-f2b5152fc356-etcd-client\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.766771 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a58f4e37-afc4-442b-b93e-87303f0dbdb6-serving-cert\") pod \"authentication-operator-7f5c659b84-ftc7p\" (UID: \"a58f4e37-afc4-442b-b93e-87303f0dbdb6\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.766783 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4751d5a1-9958-4f4f-aa73-a94b587a09b7-serving-cert\") pod \"controller-manager-65b6cccf98-54cg5\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.767184 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0917873e-8059-49a3-aec4-f2b5152fc356-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.767910 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0917873e-8059-49a3-aec4-f2b5152fc356-encryption-config\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.773237 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-xtwzt"]
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.773397 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.776689 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.776735 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.778751 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.779781 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxrwc\" (UniqueName: \"kubernetes.io/projected/a58f4e37-afc4-442b-b93e-87303f0dbdb6-kube-api-access-mxrwc\") pod \"authentication-operator-7f5c659b84-ftc7p\" (UID: \"a58f4e37-afc4-442b-b93e-87303f0dbdb6\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.780021 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.782451 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8kds\" (UniqueName: \"kubernetes.io/projected/0917873e-8059-49a3-aec4-f2b5152fc356-kube-api-access-q8kds\") pod \"apiserver-9ddfb9f55-57k5h\" (UID: \"0917873e-8059-49a3-aec4-f2b5152fc356\") " pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.784485 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjfkn\" (UniqueName: \"kubernetes.io/projected/4751d5a1-9958-4f4f-aa73-a94b587a09b7-kube-api-access-rjfkn\") pod \"controller-manager-65b6cccf98-54cg5\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.786430 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.859047 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID:
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.859208 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eb0c4171-4c7a-4d9c-a467-47895e7dca09-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-dj8z9\" (UID: \"eb0c4171-4c7a-4d9c-a467-47895e7dca09\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.859245 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb0c4171-4c7a-4d9c-a467-47895e7dca09-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-dj8z9\" (UID: \"eb0c4171-4c7a-4d9c-a467-47895e7dca09\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9" Dec 09 14:13:47 crc kubenswrapper[5173]: E1209 14:13:47.859281 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:48.359247405 +0000 UTC m=+111.284529652 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.859437 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4d271279-fdf8-48d7-b1d8-1b05fee604d4-etcd-client\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.859527 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d271279-fdf8-48d7-b1d8-1b05fee604d4-config\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.859573 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d271279-fdf8-48d7-b1d8-1b05fee604d4-etcd-service-ca\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.859604 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4d271279-fdf8-48d7-b1d8-1b05fee604d4-tmp-dir\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.859633 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/4d271279-fdf8-48d7-b1d8-1b05fee604d4-etcd-ca\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.859653 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eb0c4171-4c7a-4d9c-a467-47895e7dca09-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-dj8z9\" (UID: \"eb0c4171-4c7a-4d9c-a467-47895e7dca09\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.859711 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgkjj\" (UniqueName: \"kubernetes.io/projected/eb0c4171-4c7a-4d9c-a467-47895e7dca09-kube-api-access-bgkjj\") pod \"ingress-operator-6b9cb4dbcf-dj8z9\" (UID: \"eb0c4171-4c7a-4d9c-a467-47895e7dca09\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.859750 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d271279-fdf8-48d7-b1d8-1b05fee604d4-serving-cert\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.859768 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8h52b\" (UniqueName: \"kubernetes.io/projected/4d271279-fdf8-48d7-b1d8-1b05fee604d4-kube-api-access-8h52b\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.861633 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4d271279-fdf8-48d7-b1d8-1b05fee604d4-tmp-dir\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.861645 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d271279-fdf8-48d7-b1d8-1b05fee604d4-config\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.861905 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d271279-fdf8-48d7-b1d8-1b05fee604d4-etcd-service-ca\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.862155 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4d271279-fdf8-48d7-b1d8-1b05fee604d4-etcd-ca\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.864982 5173 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4d271279-fdf8-48d7-b1d8-1b05fee604d4-etcd-client\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.869732 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d271279-fdf8-48d7-b1d8-1b05fee604d4-serving-cert\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.870608 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.880207 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h52b\" (UniqueName: \"kubernetes.io/projected/4d271279-fdf8-48d7-b1d8-1b05fee604d4-kube-api-access-8h52b\") pod \"etcd-operator-69b85846b6-lh94q\" (UID: \"4d271279-fdf8-48d7-b1d8-1b05fee604d4\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.890096 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-xtwzt" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.891881 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.891840 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.893420 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.893464 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.895528 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8"] Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.914475 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.919986 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.960864 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eb0c4171-4c7a-4d9c-a467-47895e7dca09-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-dj8z9\" (UID: \"eb0c4171-4c7a-4d9c-a467-47895e7dca09\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.960938 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bgkjj\" (UniqueName: \"kubernetes.io/projected/eb0c4171-4c7a-4d9c-a467-47895e7dca09-kube-api-access-bgkjj\") pod \"ingress-operator-6b9cb4dbcf-dj8z9\" (UID: \"eb0c4171-4c7a-4d9c-a467-47895e7dca09\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.960985 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/07abd9d6-5952-41d9-aea4-ae02adf03b84-tmp-dir\") pod \"dns-operator-799b87ffcd-xtwzt\" (UID: \"07abd9d6-5952-41d9-aea4-ae02adf03b84\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xtwzt" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.961131 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eb0c4171-4c7a-4d9c-a467-47895e7dca09-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-dj8z9\" (UID: \"eb0c4171-4c7a-4d9c-a467-47895e7dca09\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.961179 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.961205 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb0c4171-4c7a-4d9c-a467-47895e7dca09-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-dj8z9\" (UID: \"eb0c4171-4c7a-4d9c-a467-47895e7dca09\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.961228 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/07abd9d6-5952-41d9-aea4-ae02adf03b84-metrics-tls\") pod \"dns-operator-799b87ffcd-xtwzt\" (UID: \"07abd9d6-5952-41d9-aea4-ae02adf03b84\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xtwzt" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.961284 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkgfj\" (UniqueName: \"kubernetes.io/projected/07abd9d6-5952-41d9-aea4-ae02adf03b84-kube-api-access-dkgfj\") pod \"dns-operator-799b87ffcd-xtwzt\" (UID: \"07abd9d6-5952-41d9-aea4-ae02adf03b84\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xtwzt" Dec 09 14:13:47 crc kubenswrapper[5173]: E1209 14:13:47.961683 5173 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:48.461653703 +0000 UTC m=+111.386936130 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.962924 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb0c4171-4c7a-4d9c-a467-47895e7dca09-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-dj8z9\" (UID: \"eb0c4171-4c7a-4d9c-a467-47895e7dca09\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.969407 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eb0c4171-4c7a-4d9c-a467-47895e7dca09-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-dj8z9\" (UID: \"eb0c4171-4c7a-4d9c-a467-47895e7dca09\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.979990 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgkjj\" (UniqueName: \"kubernetes.io/projected/eb0c4171-4c7a-4d9c-a467-47895e7dca09-kube-api-access-bgkjj\") pod \"ingress-operator-6b9cb4dbcf-dj8z9\" (UID: \"eb0c4171-4c7a-4d9c-a467-47895e7dca09\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.981668 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eb0c4171-4c7a-4d9c-a467-47895e7dca09-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-dj8z9\" (UID: \"eb0c4171-4c7a-4d9c-a467-47895e7dca09\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9" Dec 09 14:13:47 crc kubenswrapper[5173]: I1209 14:13:47.996826 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.061993 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.062331 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/07abd9d6-5952-41d9-aea4-ae02adf03b84-tmp-dir\") pod \"dns-operator-799b87ffcd-xtwzt\" (UID: \"07abd9d6-5952-41d9-aea4-ae02adf03b84\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xtwzt" Dec 09 14:13:48 crc kubenswrapper[5173]: E1209 14:13:48.062380 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:48.562337817 +0000 UTC m=+111.487620064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.062487 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.062516 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/07abd9d6-5952-41d9-aea4-ae02adf03b84-metrics-tls\") pod \"dns-operator-799b87ffcd-xtwzt\" (UID: \"07abd9d6-5952-41d9-aea4-ae02adf03b84\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xtwzt" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.062567 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dkgfj\" (UniqueName: \"kubernetes.io/projected/07abd9d6-5952-41d9-aea4-ae02adf03b84-kube-api-access-dkgfj\") pod \"dns-operator-799b87ffcd-xtwzt\" (UID: \"07abd9d6-5952-41d9-aea4-ae02adf03b84\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xtwzt" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.062809 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/07abd9d6-5952-41d9-aea4-ae02adf03b84-tmp-dir\") pod \"dns-operator-799b87ffcd-xtwzt\" (UID: \"07abd9d6-5952-41d9-aea4-ae02adf03b84\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xtwzt" Dec 09 14:13:48 crc kubenswrapper[5173]: E1209 14:13:48.063588 5173 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:48.563580435 +0000 UTC m=+111.488862682 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.067890 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/07abd9d6-5952-41d9-aea4-ae02adf03b84-metrics-tls\") pod \"dns-operator-799b87ffcd-xtwzt\" (UID: \"07abd9d6-5952-41d9-aea4-ae02adf03b84\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xtwzt" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.086242 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkgfj\" (UniqueName: \"kubernetes.io/projected/07abd9d6-5952-41d9-aea4-ae02adf03b84-kube-api-access-dkgfj\") pod \"dns-operator-799b87ffcd-xtwzt\" (UID: \"07abd9d6-5952-41d9-aea4-ae02adf03b84\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-xtwzt" Dec 09 14:13:48 crc kubenswrapper[5173]: W1209 14:13:48.093202 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0917873e_8059_49a3_aec4_f2b5152fc356.slice/crio-7046cbf4551987c96330ac50643a0370428fc5e32ef6e6b9e763b4fa3f1b1ca2 WatchSource:0}: Error finding container 7046cbf4551987c96330ac50643a0370428fc5e32ef6e6b9e763b4fa3f1b1ca2: Status 404 returned error can't find the container with id 7046cbf4551987c96330ac50643a0370428fc5e32ef6e6b9e763b4fa3f1b1ca2 Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.098619 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.163702 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:48 crc kubenswrapper[5173]: E1209 14:13:48.163995 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:48.663978861 +0000 UTC m=+111.589261108 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:48 crc kubenswrapper[5173]: W1209 14:13:48.190156 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda58f4e37_afc4_442b_b93e_87303f0dbdb6.slice/crio-a2583ebf195645b7b5ad6f5858799de6b4a6be9e008e4ae11ce52277f634dc54 WatchSource:0}: Error finding container a2583ebf195645b7b5ad6f5858799de6b4a6be9e008e4ae11ce52277f634dc54: Status 404 returned error can't find the container with id a2583ebf195645b7b5ad6f5858799de6b4a6be9e008e4ae11ce52277f634dc54 Dec 09 14:13:48 crc kubenswrapper[5173]: W1209 14:13:48.198308 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4751d5a1_9958_4f4f_aa73_a94b587a09b7.slice/crio-5b50cb46dfb5e44f82920aed495d8438c2d76d2b5d1569f4f7f1f7e9bf30e46b WatchSource:0}: Error finding container 5b50cb46dfb5e44f82920aed495d8438c2d76d2b5d1569f4f7f1f7e9bf30e46b: Status 404 returned error can't find the container with id 5b50cb46dfb5e44f82920aed495d8438c2d76d2b5d1569f4f7f1f7e9bf30e46b Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.210994 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-xtwzt" Dec 09 14:13:48 crc kubenswrapper[5173]: W1209 14:13:48.224841 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d271279_fdf8_48d7_b1d8_1b05fee604d4.slice/crio-034485b5cd12b3c5c2745b4348312e11ea88de8db716778202ca32f68aadd55e WatchSource:0}: Error finding container 034485b5cd12b3c5c2745b4348312e11ea88de8db716778202ca32f68aadd55e: Status 404 returned error can't find the container with id 034485b5cd12b3c5c2745b4348312e11ea88de8db716778202ca32f68aadd55e Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.265002 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:48 crc kubenswrapper[5173]: E1209 14:13:48.265318 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:48.765303593 +0000 UTC m=+111.690585840 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:48 crc kubenswrapper[5173]: W1209 14:13:48.303172 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb0c4171_4c7a_4d9c_a467_47895e7dca09.slice/crio-94e85b817f73448145dc1d04c059d2bb0de2b062a68f2ef4ff7c588702c713c7 WatchSource:0}: Error finding container 94e85b817f73448145dc1d04c059d2bb0de2b062a68f2ef4ff7c588702c713c7: Status 404 returned error can't find the container with id 94e85b817f73448145dc1d04c059d2bb0de2b062a68f2ef4ff7c588702c713c7 Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.366330 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:48 crc kubenswrapper[5173]: E1209 14:13:48.366860 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:48.866827233 +0000 UTC m=+111.792109480 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:48 crc kubenswrapper[5173]: W1209 14:13:48.410582 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07abd9d6_5952_41d9_aea4_ae02adf03b84.slice/crio-26868a8101aec4d9c03d7f99cccf28fdb5aca93463375f51bd08ab0126ee7677 WatchSource:0}: Error finding container 26868a8101aec4d9c03d7f99cccf28fdb5aca93463375f51bd08ab0126ee7677: Status 404 returned error can't find the container with id 26868a8101aec4d9c03d7f99cccf28fdb5aca93463375f51bd08ab0126ee7677 Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.468113 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:48 crc kubenswrapper[5173]: E1209 14:13:48.468592 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-09 14:13:48.96857113 +0000 UTC m=+111.893853387 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.568960 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:48 crc kubenswrapper[5173]: E1209 14:13:48.569124 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:49.069100129 +0000 UTC m=+111.994382386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.569285 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:48 crc kubenswrapper[5173]: E1209 14:13:48.569679 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:49.069666917 +0000 UTC m=+111.994949184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.670008 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:48 crc kubenswrapper[5173]: E1209 14:13:48.670217 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:49.170190666 +0000 UTC m=+112.095472913 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.762628 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-znppb"] Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.762782 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.767114 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.767241 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.767347 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.767469 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.767679 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.767758 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.773229 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:48 crc kubenswrapper[5173]: E1209 14:13:48.773565 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:49.273551682 +0000 UTC m=+112.198833919 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.873992 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.874117 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlnpl\" (UniqueName: \"kubernetes.io/projected/683a6416-7033-4896-9e1e-be8b31f74d38-kube-api-access-hlnpl\") pod \"machine-approver-54c688565-dzzb8\" (UID: \"683a6416-7033-4896-9e1e-be8b31f74d38\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.874165 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/683a6416-7033-4896-9e1e-be8b31f74d38-config\") pod \"machine-approver-54c688565-dzzb8\" (UID: \"683a6416-7033-4896-9e1e-be8b31f74d38\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.874194 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/683a6416-7033-4896-9e1e-be8b31f74d38-machine-approver-tls\") pod \"machine-approver-54c688565-dzzb8\" (UID: \"683a6416-7033-4896-9e1e-be8b31f74d38\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8" Dec 09 14:13:48 crc kubenswrapper[5173]: E1209 14:13:48.874318 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:49.374273407 +0000 UTC m=+112.299555654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.874392 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/683a6416-7033-4896-9e1e-be8b31f74d38-auth-proxy-config\") pod \"machine-approver-54c688565-dzzb8\" (UID: \"683a6416-7033-4896-9e1e-be8b31f74d38\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.975194 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/683a6416-7033-4896-9e1e-be8b31f74d38-config\") pod \"machine-approver-54c688565-dzzb8\" (UID: \"683a6416-7033-4896-9e1e-be8b31f74d38\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.975246 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/683a6416-7033-4896-9e1e-be8b31f74d38-machine-approver-tls\") pod \"machine-approver-54c688565-dzzb8\" (UID: \"683a6416-7033-4896-9e1e-be8b31f74d38\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.975290 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/683a6416-7033-4896-9e1e-be8b31f74d38-auth-proxy-config\") pod \"machine-approver-54c688565-dzzb8\" (UID: \"683a6416-7033-4896-9e1e-be8b31f74d38\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.975320 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hlnpl\" (UniqueName: \"kubernetes.io/projected/683a6416-7033-4896-9e1e-be8b31f74d38-kube-api-access-hlnpl\") pod \"machine-approver-54c688565-dzzb8\" (UID: \"683a6416-7033-4896-9e1e-be8b31f74d38\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.975342 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:48 crc kubenswrapper[5173]: E1209 14:13:48.975623 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:49.475609302 +0000 UTC m=+112.400891549 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.976309 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/683a6416-7033-4896-9e1e-be8b31f74d38-auth-proxy-config\") pod \"machine-approver-54c688565-dzzb8\" (UID: \"683a6416-7033-4896-9e1e-be8b31f74d38\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.977586 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/683a6416-7033-4896-9e1e-be8b31f74d38-config\") pod \"machine-approver-54c688565-dzzb8\" (UID: \"683a6416-7033-4896-9e1e-be8b31f74d38\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.982103 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/683a6416-7033-4896-9e1e-be8b31f74d38-machine-approver-tls\") pod \"machine-approver-54c688565-dzzb8\" (UID: \"683a6416-7033-4896-9e1e-be8b31f74d38\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8" Dec 09 14:13:48 crc kubenswrapper[5173]: I1209 14:13:48.992058 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlnpl\" (UniqueName: \"kubernetes.io/projected/683a6416-7033-4896-9e1e-be8b31f74d38-kube-api-access-hlnpl\") pod \"machine-approver-54c688565-dzzb8\" (UID: \"683a6416-7033-4896-9e1e-be8b31f74d38\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.076483 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:49 crc kubenswrapper[5173]: E1209 14:13:49.076786 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:49.576737109 +0000 UTC m=+112.502019356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.080806 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.178305 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:49 crc kubenswrapper[5173]: E1209 14:13:49.178922 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:49.678886598 +0000 UTC m=+112.604168855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.279766 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:49 crc kubenswrapper[5173]: E1209 14:13:49.279961 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:49.779934093 +0000 UTC m=+112.705216340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.280088 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:49 crc kubenswrapper[5173]: E1209 14:13:49.280419 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:49.780406428 +0000 UTC m=+112.705688675 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.380814 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:49 crc kubenswrapper[5173]: E1209 14:13:49.381046 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:49.88102996 +0000 UTC m=+112.806312207 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.413036 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z"] Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.413211 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.418048 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.418275 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.418466 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.418646 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.419558 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.419994 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.419999 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.420632 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.420643 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.420914 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.421555 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.421666 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.421708 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.432956 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.433432 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.441666 5173 generic.go:358] "Generic (PLEG): container finished" podID="76317343-bf5b-441f-ae79-e09f3d1188cd" containerID="236cf3b947f87fdab2ab5ef79d3412dbfc18f6efbb7b82ee6d006e38aab20398" exitCode=0 Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.482408 5173 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.482464 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.482492 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.482554 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-audit-policies\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.482578 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.482604 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.482631 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb78d03e-40d5-4c32-9f47-49a596f9b55a-audit-dir\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.482651 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 
14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.482685 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.482724 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.482752 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.482777 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.482806 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.482827 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.482850 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5zgw\" (UniqueName: \"kubernetes.io/projected/fb78d03e-40d5-4c32-9f47-49a596f9b55a-kube-api-access-d5zgw\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: E1209 14:13:49.483165 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-09 14:13:49.983152377 +0000 UTC m=+112.908434624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.551263 5173 scope.go:117] "RemoveContainer" containerID="c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d" Dec 09 14:13:49 crc kubenswrapper[5173]: E1209 14:13:49.551551 5173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.583361 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:49 crc kubenswrapper[5173]: E1209 14:13:49.583543 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.083518521 +0000 UTC m=+113.008800768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.583615 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.583661 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.583708 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.583732 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.583757 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d5zgw\" (UniqueName: \"kubernetes.io/projected/fb78d03e-40d5-4c32-9f47-49a596f9b55a-kube-api-access-d5zgw\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.583818 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:49 crc kubenswrapper[5173]: E1209 14:13:49.584085 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.084078279 +0000 UTC m=+113.009360516 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.584288 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.585648 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.585923 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-audit-policies\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.585956 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.585986 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.586014 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb78d03e-40d5-4c32-9f47-49a596f9b55a-audit-dir\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.586040 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 
14:13:49.586088 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.586809 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb78d03e-40d5-4c32-9f47-49a596f9b55a-audit-dir\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.586979 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.590585 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.595138 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.595619 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.596098 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.596226 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-audit-policies\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.597042 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.597265 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.597676 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.598023 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.599534 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.600280 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.601175 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.604402 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5zgw\" (UniqueName: \"kubernetes.io/projected/fb78d03e-40d5-4c32-9f47-49a596f9b55a-kube-api-access-d5zgw\") pod \"oauth-openshift-66458b6674-znppb\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") " pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.669534 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p" event={"ID":"a58f4e37-afc4-442b-b93e-87303f0dbdb6","Type":"ContainerStarted","Data":"a2583ebf195645b7b5ad6f5858799de6b4a6be9e008e4ae11ce52277f634dc54"} Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.669597 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk"] Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.669856 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.672849 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.672990 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.687969 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:49 crc kubenswrapper[5173]: E1209 14:13:49.688171 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.188142318 +0000 UTC m=+113.113424565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.688784 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:49 crc kubenswrapper[5173]: E1209 14:13:49.689146 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.189138609 +0000 UTC m=+113.114420856 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.742573 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-znppb" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.790036 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.790268 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19dbfec3-c944-4ab4-9b21-a1ac67840543-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-2wr9z\" (UID: \"19dbfec3-c944-4ab4-9b21-a1ac67840543\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.790303 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/19dbfec3-c944-4ab4-9b21-a1ac67840543-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-2wr9z\" (UID: \"19dbfec3-c944-4ab4-9b21-a1ac67840543\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.790325 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19dbfec3-c944-4ab4-9b21-a1ac67840543-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-2wr9z\" (UID: \"19dbfec3-c944-4ab4-9b21-a1ac67840543\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: E1209 14:13:49.790401 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.290331429 +0000 UTC m=+113.215613716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.790549 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lrrg\" (UniqueName: \"kubernetes.io/projected/19dbfec3-c944-4ab4-9b21-a1ac67840543-kube-api-access-9lrrg\") pod \"cluster-image-registry-operator-86c45576b9-2wr9z\" (UID: \"19dbfec3-c944-4ab4-9b21-a1ac67840543\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.790647 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/19dbfec3-c944-4ab4-9b21-a1ac67840543-tmp\") pod \"cluster-image-registry-operator-86c45576b9-2wr9z\" (UID: \"19dbfec3-c944-4ab4-9b21-a1ac67840543\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.790703 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.790729 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/19dbfec3-c944-4ab4-9b21-a1ac67840543-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-2wr9z\" (UID: \"19dbfec3-c944-4ab4-9b21-a1ac67840543\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: E1209 14:13:49.790970 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.290962498 +0000 UTC m=+113.216244745 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.892315 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.893167 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19dbfec3-c944-4ab4-9b21-a1ac67840543-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-2wr9z\" (UID: \"19dbfec3-c944-4ab4-9b21-a1ac67840543\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.893199 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/19dbfec3-c944-4ab4-9b21-a1ac67840543-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-2wr9z\" (UID: \"19dbfec3-c944-4ab4-9b21-a1ac67840543\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.893225 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19dbfec3-c944-4ab4-9b21-a1ac67840543-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-2wr9z\" (UID: \"19dbfec3-c944-4ab4-9b21-a1ac67840543\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.893256 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9lrrg\" (UniqueName: \"kubernetes.io/projected/19dbfec3-c944-4ab4-9b21-a1ac67840543-kube-api-access-9lrrg\") pod \"cluster-image-registry-operator-86c45576b9-2wr9z\" (UID: \"19dbfec3-c944-4ab4-9b21-a1ac67840543\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.893298 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/19dbfec3-c944-4ab4-9b21-a1ac67840543-tmp\") pod \"cluster-image-registry-operator-86c45576b9-2wr9z\" (UID: \"19dbfec3-c944-4ab4-9b21-a1ac67840543\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.893343 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/19dbfec3-c944-4ab4-9b21-a1ac67840543-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-2wr9z\" (UID: \"19dbfec3-c944-4ab4-9b21-a1ac67840543\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 
crc kubenswrapper[5173]: E1209 14:13:49.893850 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.393805529 +0000 UTC m=+113.319087956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.893929 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/19dbfec3-c944-4ab4-9b21-a1ac67840543-tmp\") pod \"cluster-image-registry-operator-86c45576b9-2wr9z\" (UID: \"19dbfec3-c944-4ab4-9b21-a1ac67840543\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.894025 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/19dbfec3-c944-4ab4-9b21-a1ac67840543-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-2wr9z\" (UID: \"19dbfec3-c944-4ab4-9b21-a1ac67840543\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.895841 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19dbfec3-c944-4ab4-9b21-a1ac67840543-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-2wr9z\" (UID: \"19dbfec3-c944-4ab4-9b21-a1ac67840543\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.904288 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/19dbfec3-c944-4ab4-9b21-a1ac67840543-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-2wr9z\" (UID: \"19dbfec3-c944-4ab4-9b21-a1ac67840543\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.912101 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19dbfec3-c944-4ab4-9b21-a1ac67840543-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-2wr9z\" (UID: \"19dbfec3-c944-4ab4-9b21-a1ac67840543\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.913318 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lrrg\" (UniqueName: \"kubernetes.io/projected/19dbfec3-c944-4ab4-9b21-a1ac67840543-kube-api-access-9lrrg\") pod \"cluster-image-registry-operator-86c45576b9-2wr9z\" (UID: \"19dbfec3-c944-4ab4-9b21-a1ac67840543\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.988862 5173 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" Dec 09 14:13:49 crc kubenswrapper[5173]: I1209 14:13:49.994079 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:49 crc kubenswrapper[5173]: E1209 14:13:49.994589 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.494569105 +0000 UTC m=+113.419851362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.095298 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:50 crc kubenswrapper[5173]: E1209 14:13:50.095495 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.595464826 +0000 UTC m=+113.520747073 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:50 crc kubenswrapper[5173]: W1209 14:13:50.167714 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19dbfec3_c944_4ab4_9b21_a1ac67840543.slice/crio-237431e2a6df03a4fca101b2afc5bbc733f878aeebe3349b23611782303395fc WatchSource:0}: Error finding container 237431e2a6df03a4fca101b2afc5bbc733f878aeebe3349b23611782303395fc: Status 404 returned error can't find the container with id 237431e2a6df03a4fca101b2afc5bbc733f878aeebe3349b23611782303395fc Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.196743 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:50 crc kubenswrapper[5173]: E1209 14:13:50.197172 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.69715421 +0000 UTC m=+113.622436457 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.252836 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk" Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.254963 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.254976 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.255009 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.255110 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.255298 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.262795 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"] Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.298473 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:50 crc kubenswrapper[5173]: E1209 14:13:50.298587 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.798563316 +0000 UTC m=+113.723845563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.298768 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:50 crc kubenswrapper[5173]: E1209 14:13:50.299086 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.799079283 +0000 UTC m=+113.724361530 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.376826 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-df498"] Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.399790 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.399955 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-n4pnk\" (UID: \"5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk" Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.399997 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-n4pnk\" (UID: \"5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk" Dec 09 14:13:50 crc kubenswrapper[5173]: E1209 14:13:50.400035 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.900011594 +0000 UTC m=+113.825293841 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.400091 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqwzr\" (UniqueName: \"kubernetes.io/projected/5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c-kube-api-access-pqwzr\") pod \"openshift-controller-manager-operator-686468bdd5-n4pnk\" (UID: \"5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk" Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.400157 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.400197 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c-config\") pod \"openshift-controller-manager-operator-686468bdd5-n4pnk\" (UID: \"5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk" Dec 09 14:13:50 crc kubenswrapper[5173]: E1209 14:13:50.400578 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:50.900567532 +0000 UTC m=+113.825849779 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.451630 5173 generic.go:358] "Generic (PLEG): container finished" podID="0917873e-8059-49a3-aec4-f2b5152fc356" containerID="5ddde056f3ac35b0c8b66cffd71d82c7aeecd2d0cb97f242816bdfc0e8a1fee7" exitCode=0
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.500750 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:50 crc kubenswrapper[5173]: E1209 14:13:50.500910 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:51.000888374 +0000 UTC m=+113.926170621 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.501079 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.501122 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-audit-policies\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.501148 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c-config\") pod \"openshift-controller-manager-operator-686468bdd5-n4pnk\" (UID: \"5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.501172 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.501200 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.501429 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-etcd-serving-ca\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.501478 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-n4pnk\" (UID: \"5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.501533 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-encryption-config\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.501570 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-n4pnk\" (UID: \"5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.501590 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.501618 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pqwzr\" (UniqueName: \"kubernetes.io/projected/5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c-kube-api-access-pqwzr\") pod \"openshift-controller-manager-operator-686468bdd5-n4pnk\" (UID: \"5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.501638 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-trusted-ca-bundle\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.501659 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-serving-cert\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.501693 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccnmv\" (UniqueName: \"kubernetes.io/projected/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-kube-api-access-ccnmv\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.501718 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.501743 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-etcd-client\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.501758 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-audit-dir\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: E1209 14:13:50.501805 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:51.001771532 +0000 UTC m=+113.927053978 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.502304 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c-config\") pod \"openshift-controller-manager-operator-686468bdd5-n4pnk\" (UID: \"5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.502460 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-n4pnk\" (UID: \"5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.503307 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.507993 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.508413 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-n4pnk\" (UID: \"5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.508609 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.509655 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.519788 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqwzr\" (UniqueName: \"kubernetes.io/projected/5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c-kube-api-access-pqwzr\") pod \"openshift-controller-manager-operator-686468bdd5-n4pnk\" (UID: \"5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.541760 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h" event={"ID":"0917873e-8059-49a3-aec4-f2b5152fc356","Type":"ContainerStarted","Data":"7046cbf4551987c96330ac50643a0370428fc5e32ef6e6b9e763b4fa3f1b1ca2"}
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.541871 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.541921 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"]
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.541968 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-df498"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.544389 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.545667 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.545981 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.546188 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.546314 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.546455 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.546576 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.546786 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.546934 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.547062 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.547171 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.547403 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.549728 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.549986 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.577337 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.603288 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.603525 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-trusted-ca-bundle\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.603559 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-serving-cert\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.603587 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ccnmv\" (UniqueName: \"kubernetes.io/projected/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-kube-api-access-ccnmv\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.603614 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-etcd-client\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.603633 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-audit-dir\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.603670 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-audit-policies\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.603708 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-etcd-serving-ca\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.603753 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-encryption-config\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.604627 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-audit-dir\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: E1209 14:13:50.604973 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:51.104951823 +0000 UTC m=+114.030234080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.609458 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-trusted-ca-bundle\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.609947 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-etcd-serving-ca\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.610162 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-audit-policies\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.610674 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-etcd-client\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.611087 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-encryption-config\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.618019 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-serving-cert\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.642261 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccnmv\" (UniqueName: \"kubernetes.io/projected/b2ab9ef6-9c83-482d-9ea5-148c66ca62bd-kube-api-access-ccnmv\") pod \"apiserver-8596bd845d-rxvxv\" (UID: \"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.705329 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9234899-46cc-4f8d-bfe6-a65d9532ba16-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-df498\" (UID: \"d9234899-46cc-4f8d-bfe6-a65d9532ba16\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-df498"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.705388 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs\") pod \"network-metrics-daemon-lbnx5\" (UID: \"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\") " pod="openshift-multus/network-metrics-daemon-lbnx5"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.705454 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9234899-46cc-4f8d-bfe6-a65d9532ba16-config\") pod \"openshift-apiserver-operator-846cbfc458-df498\" (UID: \"d9234899-46cc-4f8d-bfe6-a65d9532ba16\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-df498"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.705488 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.705508 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5hsn\" (UniqueName: \"kubernetes.io/projected/d9234899-46cc-4f8d-bfe6-a65d9532ba16-kube-api-access-v5hsn\") pod \"openshift-apiserver-operator-846cbfc458-df498\" (UID: \"d9234899-46cc-4f8d-bfe6-a65d9532ba16\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-df498"
Dec 09 14:13:50 crc kubenswrapper[5173]: E1209 14:13:50.706168 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:51.206156122 +0000 UTC m=+114.131438359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.713908 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d73c2ad-08e4-439f-8c5f-adb67b27ef4b-metrics-certs\") pod \"network-metrics-daemon-lbnx5\" (UID: \"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b\") " pod="openshift-multus/network-metrics-daemon-lbnx5"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.742077 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.751565 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.756710 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lbnx5"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.762987 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.809568 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.809761 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9234899-46cc-4f8d-bfe6-a65d9532ba16-config\") pod \"openshift-apiserver-operator-846cbfc458-df498\" (UID: \"d9234899-46cc-4f8d-bfe6-a65d9532ba16\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-df498"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.809810 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v5hsn\" (UniqueName: \"kubernetes.io/projected/d9234899-46cc-4f8d-bfe6-a65d9532ba16-kube-api-access-v5hsn\") pod \"openshift-apiserver-operator-846cbfc458-df498\" (UID: \"d9234899-46cc-4f8d-bfe6-a65d9532ba16\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-df498"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.809863 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9234899-46cc-4f8d-bfe6-a65d9532ba16-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-df498\" (UID: \"d9234899-46cc-4f8d-bfe6-a65d9532ba16\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-df498"
Dec 09 14:13:50 crc kubenswrapper[5173]: E1209 14:13:50.810315 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:51.310283183 +0000 UTC m=+114.235565430 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.816400 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9234899-46cc-4f8d-bfe6-a65d9532ba16-config\") pod \"openshift-apiserver-operator-846cbfc458-df498\" (UID: \"d9234899-46cc-4f8d-bfe6-a65d9532ba16\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-df498"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.823953 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9234899-46cc-4f8d-bfe6-a65d9532ba16-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-df498\" (UID: \"d9234899-46cc-4f8d-bfe6-a65d9532ba16\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-df498"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.826460 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5hsn\" (UniqueName: \"kubernetes.io/projected/d9234899-46cc-4f8d-bfe6-a65d9532ba16-kube-api-access-v5hsn\") pod \"openshift-apiserver-operator-846cbfc458-df498\" (UID: \"d9234899-46cc-4f8d-bfe6-a65d9532ba16\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-df498"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.871729 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.877642 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-df498"
Dec 09 14:13:50 crc kubenswrapper[5173]: I1209 14:13:50.917851 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:50 crc kubenswrapper[5173]: E1209 14:13:50.918813 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:51.418796861 +0000 UTC m=+114.344079108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.018615 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.018894 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:51.518877536 +0000 UTC m=+114.444159783 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:51 crc kubenswrapper[5173]: W1209 14:13:51.027333 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-bbb22c094946ed0b3f8f8324762677869d17fa3fe97d2e5be1e0278a546e402c WatchSource:0}: Error finding container bbb22c094946ed0b3f8f8324762677869d17fa3fe97d2e5be1e0278a546e402c: Status 404 returned error can't find the container with id bbb22c094946ed0b3f8f8324762677869d17fa3fe97d2e5be1e0278a546e402c
Dec 09 14:13:51 crc kubenswrapper[5173]: W1209 14:13:51.079009 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf863fff9_286a_45fa_b8f0_8a86994b8440.slice/crio-a646111d188c4765a5a232c88abfd0c1261f8b54558bdb02ca5746bee8dc0b90 WatchSource:0}: Error finding container a646111d188c4765a5a232c88abfd0c1261f8b54558bdb02ca5746bee8dc0b90: Status 404 returned error can't find the container with id a646111d188c4765a5a232c88abfd0c1261f8b54558bdb02ca5746bee8dc0b90
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.128265 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.128593 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:51.62858069 +0000 UTC m=+114.553862937 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:51 crc kubenswrapper[5173]: W1209 14:13:51.156660 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2ab9ef6_9c83_482d_9ea5_148c66ca62bd.slice/crio-45470d1b2ca947d28bbf1ae10ad736024100f79218e1d2def4be924e11797f04 WatchSource:0}: Error finding container 45470d1b2ca947d28bbf1ae10ad736024100f79218e1d2def4be924e11797f04: Status 404 returned error can't find the container with id 45470d1b2ca947d28bbf1ae10ad736024100f79218e1d2def4be924e11797f04
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.182379 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-zhlr7" event={"ID":"6794662c-7933-4e08-870f-c44892aef039","Type":"ContainerStarted","Data":"5b1d06ee24b7e3093e6dfc88ab8d99fb6b2da0585351f6dabd3df90de594fc87"}
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.182451 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr"]
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.184594 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p" podStartSLOduration=89.184572103 podStartE2EDuration="1m29.184572103s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:50.864037016 +0000 UTC m=+113.789319263" watchObservedRunningTime="2025-12-09 14:13:51.184572103 +0000 UTC m=+114.109854350"
Dec 09 14:13:51 crc kubenswrapper[5173]: W1209 14:13:51.198105 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9234899_46cc_4f8d_bfe6_a65d9532ba16.slice/crio-771bdbde0ec55234aef846ae20d0e7f680982c3a16c51e05dc8c73f3d1f6ba99 WatchSource:0}: Error finding container 771bdbde0ec55234aef846ae20d0e7f680982c3a16c51e05dc8c73f3d1f6ba99: Status 404 returned error can't find the container with id 771bdbde0ec55234aef846ae20d0e7f680982c3a16c51e05dc8c73f3d1f6ba99
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.229802 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.230210 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:51.730193542 +0000 UTC m=+114.655475789 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.331179 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-config\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.331480 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ggvp\" (UniqueName: \"kubernetes.io/projected/36b504f1-6aae-4802-ab5d-ce89caf2f742-kube-api-access-9ggvp\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.331603 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.331743 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b504f1-6aae-4802-ab5d-ce89caf2f742-serving-cert\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.331841 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36b504f1-6aae-4802-ab5d-ce89caf2f742-tmp\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.331881 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:51.831867577 +0000 UTC m=+114.757149824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.332049 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-client-ca\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.433072 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.433319 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b504f1-6aae-4802-ab5d-ce89caf2f742-serving-cert\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.433379 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36b504f1-6aae-4802-ab5d-ce89caf2f742-tmp\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.433416 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-client-ca\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.433526 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:51.93347657 +0000 UTC m=+114.858758857 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.433619 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-config\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.433714 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9ggvp\" (UniqueName: \"kubernetes.io/projected/36b504f1-6aae-4802-ab5d-ce89caf2f742-kube-api-access-9ggvp\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.433806 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.433822 5173 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.433941 5173 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.434011 5173 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.433972 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-config podName:36b504f1-6aae-4802-ab5d-ce89caf2f742 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:51.933942214 +0000 UTC m=+114.859224451 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-config") pod "route-controller-manager-776cdc94d6-ppjzv" (UID: "36b504f1-6aae-4802-ab5d-ce89caf2f742") : object "openshift-route-controller-manager"/"config" not registered
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.434141 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36b504f1-6aae-4802-ab5d-ce89caf2f742-serving-cert podName:36b504f1-6aae-4802-ab5d-ce89caf2f742 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:51.934113579 +0000 UTC m=+114.859396006 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/36b504f1-6aae-4802-ab5d-ce89caf2f742-serving-cert") pod "route-controller-manager-776cdc94d6-ppjzv" (UID: "36b504f1-6aae-4802-ab5d-ce89caf2f742") : object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.434162 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-client-ca podName:36b504f1-6aae-4802-ab5d-ce89caf2f742 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:51.93415306 +0000 UTC m=+114.859435557 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-client-ca") pod "route-controller-manager-776cdc94d6-ppjzv" (UID: "36b504f1-6aae-4802-ab5d-ce89caf2f742") : object "openshift-route-controller-manager"/"client-ca" not registered
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.434466 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:51.934443949 +0000 UTC m=+114.859726356 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.436945 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36b504f1-6aae-4802-ab5d-ce89caf2f742-tmp\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.461132 5173 projected.go:289] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.461201 5173 projected.go:289] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.461229 5173 projected.go:194] Error preparing data for projected volume kube-api-access-9ggvp for pod openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.463685 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36b504f1-6aae-4802-ab5d-ce89caf2f742-kube-api-access-9ggvp podName:36b504f1-6aae-4802-ab5d-ce89caf2f742 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:51.963641978 +0000 UTC m=+114.888924265 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9ggvp" (UniqueName: "kubernetes.io/projected/36b504f1-6aae-4802-ab5d-ce89caf2f742-kube-api-access-9ggvp") pod "route-controller-manager-776cdc94d6-ppjzv" (UID: "36b504f1-6aae-4802-ab5d-ce89caf2f742") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.534938 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.535346 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.035323159 +0000 UTC m=+114.960605406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.535460 5173 patch_prober.go:28] interesting pod/downloads-747b44746d-zhlr7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.535501 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-zhlr7" podUID="6794662c-7933-4e08-870f-c44892aef039" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.636392 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.636785 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.136766776 +0000 UTC m=+115.062049033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.737837 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.738073 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.238042179 +0000 UTC m=+115.163324446 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.738137 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.738424 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.23841623 +0000 UTC m=+115.163698477 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.839697 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.839849 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.339825727 +0000 UTC m=+115.265107984 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.839896 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.840272 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.340261379 +0000 UTC m=+115.265543636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.940966 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.941071 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.441048026 +0000 UTC m=+115.366330273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.942248 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-config\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.942393 5173 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.942454 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-config podName:36b504f1-6aae-4802-ab5d-ce89caf2f742 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.94244413 +0000 UTC m=+115.867726377 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-config") pod "route-controller-manager-776cdc94d6-ppjzv" (UID: "36b504f1-6aae-4802-ab5d-ce89caf2f742") : object "openshift-route-controller-manager"/"config" not registered Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.942552 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.942696 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b504f1-6aae-4802-ab5d-ce89caf2f742-serving-cert\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv" Dec 09 14:13:51 crc kubenswrapper[5173]: I1209 14:13:51.942850 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-client-ca\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv" Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.943070 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.44305983 +0000 UTC m=+115.368342077 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.943147 5173 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.943177 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36b504f1-6aae-4802-ab5d-ce89caf2f742-serving-cert podName:36b504f1-6aae-4802-ab5d-ce89caf2f742 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.943168643 +0000 UTC m=+115.868450890 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/36b504f1-6aae-4802-ab5d-ce89caf2f742-serving-cert") pod "route-controller-manager-776cdc94d6-ppjzv" (UID: "36b504f1-6aae-4802-ab5d-ce89caf2f742") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.943778 5173 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 09 14:13:51 crc kubenswrapper[5173]: E1209 14:13:51.943977 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-client-ca podName:36b504f1-6aae-4802-ab5d-ce89caf2f742 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.943955697 +0000 UTC m=+115.869237984 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-client-ca") pod "route-controller-manager-776cdc94d6-ppjzv" (UID: "36b504f1-6aae-4802-ab5d-ce89caf2f742") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.044216 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:52 crc kubenswrapper[5173]: E1209 14:13:52.044438 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.544401623 +0000 UTC m=+115.469683910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.045106 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9ggvp\" (UniqueName: \"kubernetes.io/projected/36b504f1-6aae-4802-ab5d-ce89caf2f742-kube-api-access-9ggvp\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.045448 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:52 crc kubenswrapper[5173]: E1209 14:13:52.045485 5173 projected.go:289] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 09 14:13:52 crc kubenswrapper[5173]: E1209 14:13:52.046029 5173 projected.go:289] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 09 14:13:52 crc kubenswrapper[5173]: E1209 14:13:52.046210 5173 projected.go:194] Error preparing data for projected volume kube-api-access-9ggvp for pod openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 09 14:13:52 crc kubenswrapper[5173]: E1209 14:13:52.046051 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.546026804 +0000 UTC m=+115.471309091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:52 crc kubenswrapper[5173]: E1209 14:13:52.046651 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36b504f1-6aae-4802-ab5d-ce89caf2f742-kube-api-access-9ggvp podName:36b504f1-6aae-4802-ab5d-ce89caf2f742 nodeName:}" failed. No retries permitted until 2025-12-09 14:13:53.046615513 +0000 UTC m=+115.971897800 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9ggvp" (UniqueName: "kubernetes.io/projected/36b504f1-6aae-4802-ab5d-ce89caf2f742-kube-api-access-9ggvp") pod "route-controller-manager-776cdc94d6-ppjzv" (UID: "36b504f1-6aae-4802-ab5d-ce89caf2f742") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.146772 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:52 crc kubenswrapper[5173]: E1209 14:13:52.147460 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.64740691 +0000 UTC m=+115.572689167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.147900 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:52 crc kubenswrapper[5173]: E1209 14:13:52.148527 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.648516004 +0000 UTC m=+115.573798251 (durationBeforeRetry 500ms). 
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.248940 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:52 crc kubenswrapper[5173]: E1209 14:13:52.249186 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.749137815 +0000 UTC m=+115.674420072 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.249602 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:52 crc kubenswrapper[5173]: E1209 14:13:52.250180 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.750158127 +0000 UTC m=+115.675440374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.351270 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:52 crc kubenswrapper[5173]: E1209 14:13:52.351567 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.851544423 +0000 UTC m=+115.776826670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.453193 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:52 crc kubenswrapper[5173]: E1209 14:13:52.454081 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:52.954055213 +0000 UTC m=+115.879337460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.540271 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-qjcxb"]
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.540455 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.541133 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.541275 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-zhlr7" podStartSLOduration=91.541266448 podStartE2EDuration="1m31.541266448s" podCreationTimestamp="2025-12-09 14:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:51.548004554 +0000 UTC m=+114.473286831" watchObservedRunningTime="2025-12-09 14:13:52.541266448 +0000 UTC m=+115.466548685"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.543325 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.543325 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.543719 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.544196 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.546287 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.546566 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.546856 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.547077 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.547287 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.548189 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.554792 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:52 crc kubenswrapper[5173]: E1209 14:13:52.555039 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:53.055014766 +0000 UTC m=+115.980297033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.660085 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f63208-f276-4c44-ad67-77446cad7193-config\") pod \"kube-controller-manager-operator-69d5f845f8-h5hkr\" (UID: \"41f63208-f276-4c44-ad67-77446cad7193\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.660140 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/41f63208-f276-4c44-ad67-77446cad7193-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-h5hkr\" (UID: \"41f63208-f276-4c44-ad67-77446cad7193\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.660172 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41f63208-f276-4c44-ad67-77446cad7193-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-h5hkr\" (UID: \"41f63208-f276-4c44-ad67-77446cad7193\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.660243 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f63208-f276-4c44-ad67-77446cad7193-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-h5hkr\" (UID: \"41f63208-f276-4c44-ad67-77446cad7193\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.660381 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:52 crc kubenswrapper[5173]: E1209 14:13:52.662394 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:53.162378138 +0000 UTC m=+116.087660385 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.688253 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b2v58" event={"ID":"5263b977-f1d9-4b01-9cd3-25a488d46ac7","Type":"ContainerStarted","Data":"3f441fe283df0df67f0e2cf1ab3c3aa92e8adad7d892e47ab1756c890c7214a0"}
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.688313 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-tnx4d"]
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.688638 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-qjcxb"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.696927 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.697471 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.697877 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.699937 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.700032 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.700309 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.760966 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.761172 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f63208-f276-4c44-ad67-77446cad7193-config\") pod \"kube-controller-manager-operator-69d5f845f8-h5hkr\" (UID: \"41f63208-f276-4c44-ad67-77446cad7193\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.761195 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/41f63208-f276-4c44-ad67-77446cad7193-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-h5hkr\" (UID: \"41f63208-f276-4c44-ad67-77446cad7193\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.761214 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41f63208-f276-4c44-ad67-77446cad7193-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-h5hkr\" (UID: \"41f63208-f276-4c44-ad67-77446cad7193\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.761241 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f63208-f276-4c44-ad67-77446cad7193-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-h5hkr\" (UID: \"41f63208-f276-4c44-ad67-77446cad7193\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.762828 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f63208-f276-4c44-ad67-77446cad7193-config\") pod \"kube-controller-manager-operator-69d5f845f8-h5hkr\" (UID: \"41f63208-f276-4c44-ad67-77446cad7193\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.762886 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/41f63208-f276-4c44-ad67-77446cad7193-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-h5hkr\" (UID: \"41f63208-f276-4c44-ad67-77446cad7193\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr"
Dec 09 14:13:52 crc kubenswrapper[5173]: E1209 14:13:52.762958 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:53.262931637 +0000 UTC m=+116.188213894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.780607 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f63208-f276-4c44-ad67-77446cad7193-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-h5hkr\" (UID: \"41f63208-f276-4c44-ad67-77446cad7193\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.784654 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41f63208-f276-4c44-ad67-77446cad7193-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-h5hkr\" (UID: \"41f63208-f276-4c44-ad67-77446cad7193\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.787164 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-zhlr7"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.787203 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g" event={"ID":"76317343-bf5b-441f-ae79-e09f3d1188cd","Type":"ContainerStarted","Data":"236cf3b947f87fdab2ab5ef79d3412dbfc18f6efbb7b82ee6d006e38aab20398"}
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.787383 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.792441 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.792669 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.793400 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.793571 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.793720 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.794150 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.794266 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.798620 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g" event={"ID":"76317343-bf5b-441f-ae79-e09f3d1188cd","Type":"ContainerStarted","Data":"6f5d7a3732c6f3f2b40423d35dab785f929127bfc052efe9f2de1a8995b20e05"}
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.798669 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-xtwzt" event={"ID":"07abd9d6-5952-41d9-aea4-ae02adf03b84","Type":"ContainerStarted","Data":"26868a8101aec4d9c03d7f99cccf28fdb5aca93463375f51bd08ab0126ee7677"}
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.798691 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-z495l"]
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.831228 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q" event={"ID":"4d271279-fdf8-48d7-b1d8-1b05fee604d4","Type":"ContainerStarted","Data":"034485b5cd12b3c5c2745b4348312e11ea88de8db716778202ca32f68aadd55e"}
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.831299 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-q5kgl" event={"ID":"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b","Type":"ContainerStarted","Data":"0f45c9af2c4df8e9796bef7b4b0b5eb18f1128e6111cf4b3e96dec77b70d6e0e"}
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.831325 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5" event={"ID":"4751d5a1-9958-4f4f-aa73-a94b587a09b7","Type":"ContainerStarted","Data":"5b50cb46dfb5e44f82920aed495d8438c2d76d2b5d1569f4f7f1f7e9bf30e46b"}
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.831339 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9" event={"ID":"eb0c4171-4c7a-4d9c-a467-47895e7dca09","Type":"ContainerStarted","Data":"94e85b817f73448145dc1d04c059d2bb0de2b062a68f2ef4ff7c588702c713c7"}
event={"ID":"eb0c4171-4c7a-4d9c-a467-47895e7dca09","Type":"ContainerStarted","Data":"94e85b817f73448145dc1d04c059d2bb0de2b062a68f2ef4ff7c588702c713c7"} Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.831341 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-z495l" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.831368 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-zhlr7" event={"ID":"6794662c-7933-4e08-870f-c44892aef039","Type":"ContainerStarted","Data":"4eb7df6d0c8bd6227ea4815369ff142bf2afa5835a9ea937a5bd021301b1c76f"} Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.831841 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c"] Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.833160 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.833406 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.833530 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.834920 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.836129 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.840945 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.862428 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff9b667-97da-48d5-85b6-7c02806cc6c6-config\") pod \"machine-api-operator-755bb95488-qjcxb\" (UID: \"7ff9b667-97da-48d5-85b6-7c02806cc6c6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qjcxb" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.862488 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcvjm\" (UniqueName: \"kubernetes.io/projected/7ff9b667-97da-48d5-85b6-7c02806cc6c6-kube-api-access-xcvjm\") pod \"machine-api-operator-755bb95488-qjcxb\" (UID: \"7ff9b667-97da-48d5-85b6-7c02806cc6c6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qjcxb" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.862600 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7ff9b667-97da-48d5-85b6-7c02806cc6c6-images\") pod \"machine-api-operator-755bb95488-qjcxb\" (UID: \"7ff9b667-97da-48d5-85b6-7c02806cc6c6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qjcxb" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.862638 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ff9b667-97da-48d5-85b6-7c02806cc6c6-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-qjcxb\" (UID: \"7ff9b667-97da-48d5-85b6-7c02806cc6c6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qjcxb" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.862949 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:52 crc kubenswrapper[5173]: E1209 14:13:52.863444 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:53.363430725 +0000 UTC m=+116.288712972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.866204 5173 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-54cg5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.866268 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5" podUID="4751d5a1-9958-4f4f-aa73-a94b587a09b7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.872757 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.898730 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-rqxjt"] Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.899218 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.902836 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5" podStartSLOduration=91.90280965 podStartE2EDuration="1m31.90280965s" podCreationTimestamp="2025-12-09 14:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:52.886930977 +0000 UTC m=+115.812213224" watchObservedRunningTime="2025-12-09 14:13:52.90280965 +0000 UTC m=+115.828091898" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.911642 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.911813 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.912203 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.912335 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.954388 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.956087 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc"] Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.955887 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-rqxjt" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.959721 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q" podStartSLOduration=90.959681281 podStartE2EDuration="1m30.959681281s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:52.911594724 +0000 UTC m=+115.836877001" watchObservedRunningTime="2025-12-09 14:13:52.959681281 +0000 UTC m=+115.884963528" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.960950 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-q5kgl" podStartSLOduration=91.96094424 podStartE2EDuration="1m31.96094424s" podCreationTimestamp="2025-12-09 14:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:52.934771285 +0000 UTC m=+115.860053562" watchObservedRunningTime="2025-12-09 14:13:52.96094424 +0000 UTC m=+115.886226477" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.963604 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.964280 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.965263 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:52 crc kubenswrapper[5173]: E1209 14:13:52.966088 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:53.465929645 +0000 UTC m=+116.391212082 (durationBeforeRetry 500ms). 
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.966254 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ff9b667-97da-48d5-85b6-7c02806cc6c6-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-qjcxb\" (UID: \"7ff9b667-97da-48d5-85b6-7c02806cc6c6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qjcxb"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.966339 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.966489 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/139a1ff9-4912-4a2c-b0d2-c220452ab9f2-stats-auth\") pod \"router-default-68cf44c8b8-tnx4d\" (UID: \"139a1ff9-4912-4a2c-b0d2-c220452ab9f2\") " pod="openshift-ingress/router-default-68cf44c8b8-tnx4d"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.966571 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e8f532f-b948-4468-9397-7318c60c6fa8-config\") pod \"console-operator-67c89758df-z495l\" (UID: \"5e8f532f-b948-4468-9397-7318c60c6fa8\") " pod="openshift-console-operator/console-operator-67c89758df-z495l"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.966599 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stbhh\" (UniqueName: \"kubernetes.io/projected/139a1ff9-4912-4a2c-b0d2-c220452ab9f2-kube-api-access-stbhh\") pod \"router-default-68cf44c8b8-tnx4d\" (UID: \"139a1ff9-4912-4a2c-b0d2-c220452ab9f2\") " pod="openshift-ingress/router-default-68cf44c8b8-tnx4d"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.966745 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/139a1ff9-4912-4a2c-b0d2-c220452ab9f2-default-certificate\") pod \"router-default-68cf44c8b8-tnx4d\" (UID: \"139a1ff9-4912-4a2c-b0d2-c220452ab9f2\") " pod="openshift-ingress/router-default-68cf44c8b8-tnx4d"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.966971 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff9b667-97da-48d5-85b6-7c02806cc6c6-config\") pod \"machine-api-operator-755bb95488-qjcxb\" (UID: \"7ff9b667-97da-48d5-85b6-7c02806cc6c6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qjcxb"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.967003 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xcvjm\" (UniqueName: \"kubernetes.io/projected/7ff9b667-97da-48d5-85b6-7c02806cc6c6-kube-api-access-xcvjm\") pod \"machine-api-operator-755bb95488-qjcxb\" (UID: \"7ff9b667-97da-48d5-85b6-7c02806cc6c6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qjcxb"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.967173 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b504f1-6aae-4802-ab5d-ce89caf2f742-serving-cert\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.967341 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/139a1ff9-4912-4a2c-b0d2-c220452ab9f2-service-ca-bundle\") pod \"router-default-68cf44c8b8-tnx4d\" (UID: \"139a1ff9-4912-4a2c-b0d2-c220452ab9f2\") " pod="openshift-ingress/router-default-68cf44c8b8-tnx4d"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.967410 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/139a1ff9-4912-4a2c-b0d2-c220452ab9f2-metrics-certs\") pod \"router-default-68cf44c8b8-tnx4d\" (UID: \"139a1ff9-4912-4a2c-b0d2-c220452ab9f2\") " pod="openshift-ingress/router-default-68cf44c8b8-tnx4d"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.969266 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-client-ca\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.969379 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e8f532f-b948-4468-9397-7318c60c6fa8-serving-cert\") pod \"console-operator-67c89758df-z495l\" (UID: \"5e8f532f-b948-4468-9397-7318c60c6fa8\") " pod="openshift-console-operator/console-operator-67c89758df-z495l"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.969566 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e8f532f-b948-4468-9397-7318c60c6fa8-trusted-ca\") pod \"console-operator-67c89758df-z495l\" (UID: \"5e8f532f-b948-4468-9397-7318c60c6fa8\") " pod="openshift-console-operator/console-operator-67c89758df-z495l"
Dec 09 14:13:52 crc kubenswrapper[5173]: E1209 14:13:52.970598 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:53.47057595 +0000 UTC m=+116.395858197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.971361 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg5zk\" (UniqueName: \"kubernetes.io/projected/5e8f532f-b948-4468-9397-7318c60c6fa8-kube-api-access-cg5zk\") pod \"console-operator-67c89758df-z495l\" (UID: \"5e8f532f-b948-4468-9397-7318c60c6fa8\") " pod="openshift-console-operator/console-operator-67c89758df-z495l"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.971475 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-config\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.971521 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7ff9b667-97da-48d5-85b6-7c02806cc6c6-images\") pod \"machine-api-operator-755bb95488-qjcxb\" (UID: \"7ff9b667-97da-48d5-85b6-7c02806cc6c6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qjcxb"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.971783 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-client-ca\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.972960 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7ff9b667-97da-48d5-85b6-7c02806cc6c6-images\") pod \"machine-api-operator-755bb95488-qjcxb\" (UID: \"7ff9b667-97da-48d5-85b6-7c02806cc6c6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qjcxb"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.973988 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-config\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.977767 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff9b667-97da-48d5-85b6-7c02806cc6c6-config\") pod \"machine-api-operator-755bb95488-qjcxb\" (UID: \"7ff9b667-97da-48d5-85b6-7c02806cc6c6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qjcxb"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.980918 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7ff9b667-97da-48d5-85b6-7c02806cc6c6-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-qjcxb\" (UID: \"7ff9b667-97da-48d5-85b6-7c02806cc6c6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qjcxb"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.986174 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b504f1-6aae-4802-ab5d-ce89caf2f742-serving-cert\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:52 crc kubenswrapper[5173]: I1209 14:13:52.992548 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcvjm\" (UniqueName: \"kubernetes.io/projected/7ff9b667-97da-48d5-85b6-7c02806cc6c6-kube-api-access-xcvjm\") pod \"machine-api-operator-755bb95488-qjcxb\" (UID: \"7ff9b667-97da-48d5-85b6-7c02806cc6c6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qjcxb"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.049086 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-qjcxb"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.074669 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b2v58" event={"ID":"5263b977-f1d9-4b01-9cd3-25a488d46ac7","Type":"ContainerStarted","Data":"a3ee805e5f6ce43678db1e29b0bf3d66ae90420cf915b61400822c05347069b6"}
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.075025 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"]
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.074946 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc"
Dec 09 14:13:53 crc kubenswrapper[5173]: E1209 14:13:53.076721 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:53.576701463 +0000 UTC m=+116.501983710 (durationBeforeRetry 500ms).
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.079807 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.080076 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:53 crc kubenswrapper[5173]: E1209 14:13:53.080494 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:53.58047528 +0000 UTC m=+116.505757527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.081027 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.081235 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.081497 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.081663 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03517711-3312-4a4f-8ede-4d39051bd092-config\") pod \"kube-apiserver-operator-575994946d-tg55c\" (UID: \"03517711-3312-4a4f-8ede-4d39051bd092\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.082279 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/139a1ff9-4912-4a2c-b0d2-c220452ab9f2-service-ca-bundle\") pod \"router-default-68cf44c8b8-tnx4d\" (UID: \"139a1ff9-4912-4a2c-b0d2-c220452ab9f2\") " pod="openshift-ingress/router-default-68cf44c8b8-tnx4d"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.082306 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/99788f65-7403-4cb0-91bb-f318172f7171-webhook-certs\") pod \"multus-admission-controller-69db94689b-rqxjt\" (UID: \"99788f65-7403-4cb0-91bb-f318172f7171\") " pod="openshift-multus/multus-admission-controller-69db94689b-rqxjt"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.082913 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.086871 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrbqp\" (UniqueName: \"kubernetes.io/projected/99788f65-7403-4cb0-91bb-f318172f7171-kube-api-access-lrbqp\") pod \"multus-admission-controller-69db94689b-rqxjt\" (UID: \"99788f65-7403-4cb0-91bb-f318172f7171\") " pod="openshift-multus/multus-admission-controller-69db94689b-rqxjt"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.086909 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e8f532f-b948-4468-9397-7318c60c6fa8-serving-cert\") pod \"console-operator-67c89758df-z495l\" (UID: \"5e8f532f-b948-4468-9397-7318c60c6fa8\") " pod="openshift-console-operator/console-operator-67c89758df-z495l"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.086940 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e8f532f-b948-4468-9397-7318c60c6fa8-trusted-ca\") pod \"console-operator-67c89758df-z495l\" (UID: \"5e8f532f-b948-4468-9397-7318c60c6fa8\") " pod="openshift-console-operator/console-operator-67c89758df-z495l"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.086960 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cg5zk\" (UniqueName: \"kubernetes.io/projected/5e8f532f-b948-4468-9397-7318c60c6fa8-kube-api-access-cg5zk\") pod \"console-operator-67c89758df-z495l\" (UID: \"5e8f532f-b948-4468-9397-7318c60c6fa8\") " pod="openshift-console-operator/console-operator-67c89758df-z495l"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.087019 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9ggvp\" (UniqueName: \"kubernetes.io/projected/36b504f1-6aae-4802-ab5d-ce89caf2f742-kube-api-access-9ggvp\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.087279 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03517711-3312-4a4f-8ede-4d39051bd092-kube-api-access\") pod \"kube-apiserver-operator-575994946d-tg55c\" (UID: \"03517711-3312-4a4f-8ede-4d39051bd092\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.087305 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/139a1ff9-4912-4a2c-b0d2-c220452ab9f2-stats-auth\") pod \"router-default-68cf44c8b8-tnx4d\" (UID: \"139a1ff9-4912-4a2c-b0d2-c220452ab9f2\") " pod="openshift-ingress/router-default-68cf44c8b8-tnx4d"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.087326 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e8f532f-b948-4468-9397-7318c60c6fa8-config\") pod \"console-operator-67c89758df-z495l\" (UID: \"5e8f532f-b948-4468-9397-7318c60c6fa8\") " pod="openshift-console-operator/console-operator-67c89758df-z495l"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.087344 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-stbhh\" (UniqueName: \"kubernetes.io/projected/139a1ff9-4912-4a2c-b0d2-c220452ab9f2-kube-api-access-stbhh\") pod \"router-default-68cf44c8b8-tnx4d\" (UID: \"139a1ff9-4912-4a2c-b0d2-c220452ab9f2\") " pod="openshift-ingress/router-default-68cf44c8b8-tnx4d"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.087436 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/139a1ff9-4912-4a2c-b0d2-c220452ab9f2-metrics-certs\") pod \"router-default-68cf44c8b8-tnx4d\" (UID: \"139a1ff9-4912-4a2c-b0d2-c220452ab9f2\") " pod="openshift-ingress/router-default-68cf44c8b8-tnx4d"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.087483 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03517711-3312-4a4f-8ede-4d39051bd092-serving-cert\") pod \"kube-apiserver-operator-575994946d-tg55c\" (UID: \"03517711-3312-4a4f-8ede-4d39051bd092\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.087501 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/139a1ff9-4912-4a2c-b0d2-c220452ab9f2-default-certificate\") pod \"router-default-68cf44c8b8-tnx4d\" (UID: \"139a1ff9-4912-4a2c-b0d2-c220452ab9f2\") " pod="openshift-ingress/router-default-68cf44c8b8-tnx4d"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.087519 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/03517711-3312-4a4f-8ede-4d39051bd092-tmp-dir\") pod \"kube-apiserver-operator-575994946d-tg55c\" (UID: \"03517711-3312-4a4f-8ede-4d39051bd092\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.087796 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/139a1ff9-4912-4a2c-b0d2-c220452ab9f2-service-ca-bundle\") pod \"router-default-68cf44c8b8-tnx4d\" (UID: \"139a1ff9-4912-4a2c-b0d2-c220452ab9f2\") " pod="openshift-ingress/router-default-68cf44c8b8-tnx4d"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.089127 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e8f532f-b948-4468-9397-7318c60c6fa8-config\") pod \"console-operator-67c89758df-z495l\" (UID: \"5e8f532f-b948-4468-9397-7318c60c6fa8\") " pod="openshift-console-operator/console-operator-67c89758df-z495l"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.092461 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e8f532f-b948-4468-9397-7318c60c6fa8-trusted-ca\") pod \"console-operator-67c89758df-z495l\" (UID: \"5e8f532f-b948-4468-9397-7318c60c6fa8\") " pod="openshift-console-operator/console-operator-67c89758df-z495l"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.097514 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/139a1ff9-4912-4a2c-b0d2-c220452ab9f2-default-certificate\") pod \"router-default-68cf44c8b8-tnx4d\" (UID: \"139a1ff9-4912-4a2c-b0d2-c220452ab9f2\") " pod="openshift-ingress/router-default-68cf44c8b8-tnx4d"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.100562 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ggvp\" (UniqueName: \"kubernetes.io/projected/36b504f1-6aae-4802-ab5d-ce89caf2f742-kube-api-access-9ggvp\") pod \"route-controller-manager-776cdc94d6-ppjzv\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.107581 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/139a1ff9-4912-4a2c-b0d2-c220452ab9f2-stats-auth\") pod \"router-default-68cf44c8b8-tnx4d\" (UID: \"139a1ff9-4912-4a2c-b0d2-c220452ab9f2\") " pod="openshift-ingress/router-default-68cf44c8b8-tnx4d"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.107712 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg5zk\" (UniqueName: \"kubernetes.io/projected/5e8f532f-b948-4468-9397-7318c60c6fa8-kube-api-access-cg5zk\") pod \"console-operator-67c89758df-z495l\" (UID: \"5e8f532f-b948-4468-9397-7318c60c6fa8\") " pod="openshift-console-operator/console-operator-67c89758df-z495l"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.113705 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e8f532f-b948-4468-9397-7318c60c6fa8-serving-cert\") pod \"console-operator-67c89758df-z495l\" (UID: \"5e8f532f-b948-4468-9397-7318c60c6fa8\") " pod="openshift-console-operator/console-operator-67c89758df-z495l"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.116775 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g" event={"ID":"76317343-bf5b-441f-ae79-e09f3d1188cd","Type":"ContainerDied","Data":"236cf3b947f87fdab2ab5ef79d3412dbfc18f6efbb7b82ee6d006e38aab20398"}
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.116958 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj"]
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.117829 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-stbhh\" (UniqueName: \"kubernetes.io/projected/139a1ff9-4912-4a2c-b0d2-c220452ab9f2-kube-api-access-stbhh\") pod \"router-default-68cf44c8b8-tnx4d\" (UID: \"139a1ff9-4912-4a2c-b0d2-c220452ab9f2\") " pod="openshift-ingress/router-default-68cf44c8b8-tnx4d"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.118045 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.120198 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.120530 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.122862 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.123262 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.128206 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.141709 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/139a1ff9-4912-4a2c-b0d2-c220452ab9f2-metrics-certs\") pod \"router-default-68cf44c8b8-tnx4d\" (UID: \"139a1ff9-4912-4a2c-b0d2-c220452ab9f2\") " pod="openshift-ingress/router-default-68cf44c8b8-tnx4d"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.150949 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-z495l"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.159256 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.191443 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:53 crc kubenswrapper[5173]: E1209 14:13:53.191589 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:53.691554797 +0000 UTC m=+116.616837054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.192257 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/99788f65-7403-4cb0-91bb-f318172f7171-webhook-certs\") pod \"multus-admission-controller-69db94689b-rqxjt\" (UID: \"99788f65-7403-4cb0-91bb-f318172f7171\") " pod="openshift-multus/multus-admission-controller-69db94689b-rqxjt"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.192312 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lrbqp\" (UniqueName: \"kubernetes.io/projected/99788f65-7403-4cb0-91bb-f318172f7171-kube-api-access-lrbqp\") pod \"multus-admission-controller-69db94689b-rqxjt\" (UID: \"99788f65-7403-4cb0-91bb-f318172f7171\") " pod="openshift-multus/multus-admission-controller-69db94689b-rqxjt"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.192373 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7-webhook-cert\") pod \"packageserver-7d4fc7d867-d66gc\" (UID: \"ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.192405 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03517711-3312-4a4f-8ede-4d39051bd092-kube-api-access\") pod \"kube-apiserver-operator-575994946d-tg55c\" (UID: \"03517711-3312-4a4f-8ede-4d39051bd092\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.192430 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7-apiservice-cert\") pod \"packageserver-7d4fc7d867-d66gc\" (UID: \"ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.192470 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03517711-3312-4a4f-8ede-4d39051bd092-serving-cert\") pod \"kube-apiserver-operator-575994946d-tg55c\" (UID: \"03517711-3312-4a4f-8ede-4d39051bd092\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.192492 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/03517711-3312-4a4f-8ede-4d39051bd092-tmp-dir\") pod \"kube-apiserver-operator-575994946d-tg55c\" (UID: \"03517711-3312-4a4f-8ede-4d39051bd092\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.192528 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqf4b\" (UniqueName: \"kubernetes.io/projected/ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7-kube-api-access-kqf4b\") pod \"packageserver-7d4fc7d867-d66gc\" (UID: \"ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.192766 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7-tmpfs\") pod \"packageserver-7d4fc7d867-d66gc\" (UID: \"ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.192859 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.192921 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03517711-3312-4a4f-8ede-4d39051bd092-config\") pod \"kube-apiserver-operator-575994946d-tg55c\" (UID: \"03517711-3312-4a4f-8ede-4d39051bd092\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.193090 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/03517711-3312-4a4f-8ede-4d39051bd092-tmp-dir\") pod \"kube-apiserver-operator-575994946d-tg55c\" (UID: \"03517711-3312-4a4f-8ede-4d39051bd092\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c"
Dec 09 14:13:53 crc kubenswrapper[5173]: E1209 14:13:53.193182 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:53.693170627 +0000 UTC m=+116.618452954 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.194402 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03517711-3312-4a4f-8ede-4d39051bd092-config\") pod \"kube-apiserver-operator-575994946d-tg55c\" (UID: \"03517711-3312-4a4f-8ede-4d39051bd092\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.199959 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03517711-3312-4a4f-8ede-4d39051bd092-serving-cert\") pod \"kube-apiserver-operator-575994946d-tg55c\" (UID: \"03517711-3312-4a4f-8ede-4d39051bd092\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.200052 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/99788f65-7403-4cb0-91bb-f318172f7171-webhook-certs\") pod \"multus-admission-controller-69db94689b-rqxjt\" (UID: \"99788f65-7403-4cb0-91bb-f318172f7171\") " pod="openshift-multus/multus-admission-controller-69db94689b-rqxjt"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.223663 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrbqp\" (UniqueName: \"kubernetes.io/projected/99788f65-7403-4cb0-91bb-f318172f7171-kube-api-access-lrbqp\") pod \"multus-admission-controller-69db94689b-rqxjt\" (UID: \"99788f65-7403-4cb0-91bb-f318172f7171\") " pod="openshift-multus/multus-admission-controller-69db94689b-rqxjt"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.232714 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03517711-3312-4a4f-8ede-4d39051bd092-kube-api-access\") pod \"kube-apiserver-operator-575994946d-tg55c\" (UID: \"03517711-3312-4a4f-8ede-4d39051bd092\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.253535 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.266190 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-s7fzg"]
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.267089 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.271684 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.271943 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.293761 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.293943 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d171fe05-fe49-46fb-9407-bdc1f9272d4b-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-z9d5g\" (UID: \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.294011 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7-webhook-cert\") pod \"packageserver-7d4fc7d867-d66gc\" (UID: \"ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.294072 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7-apiservice-cert\") pod \"packageserver-7d4fc7d867-d66gc\" (UID: \"ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.294142 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf76p\" (UniqueName: \"kubernetes.io/projected/d171fe05-fe49-46fb-9407-bdc1f9272d4b-kube-api-access-zf76p\") pod \"marketplace-operator-547dbd544d-z9d5g\" (UID: \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.294187 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kqf4b\" (UniqueName: \"kubernetes.io/projected/ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7-kube-api-access-kqf4b\") pod \"packageserver-7d4fc7d867-d66gc\" (UID: \"ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.294209 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d171fe05-fe49-46fb-9407-bdc1f9272d4b-tmp\") pod \"marketplace-operator-547dbd544d-z9d5g\" (UID: \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.294234 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7-tmpfs\") pod \"packageserver-7d4fc7d867-d66gc\" (UID: \"ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.294262 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d171fe05-fe49-46fb-9407-bdc1f9272d4b-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-z9d5g\" (UID: \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"
Dec 09 14:13:53 crc kubenswrapper[5173]: E1209 14:13:53.294405 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:53.794385118 +0000 UTC m=+116.719667365 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.297368 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7-tmpfs\") pod \"packageserver-7d4fc7d867-d66gc\" (UID: \"ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.315948 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7-webhook-cert\") pod \"packageserver-7d4fc7d867-d66gc\" (UID: \"ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.318617 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7-apiservice-cert\") pod \"packageserver-7d4fc7d867-d66gc\" (UID: \"ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.318429 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-rqxjt"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.319995 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqf4b\" (UniqueName: \"kubernetes.io/projected/ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7-kube-api-access-kqf4b\") pod \"packageserver-7d4fc7d867-d66gc\" (UID: \"ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.396504 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d171fe05-fe49-46fb-9407-bdc1f9272d4b-tmp\") pod \"marketplace-operator-547dbd544d-z9d5g\" (UID: \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.397116 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.397176 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58-srv-cert\") pod \"olm-operator-5cdf44d969-b8zmj\" (UID: \"f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.397211 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d171fe05-fe49-46fb-9407-bdc1f9272d4b-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-z9d5g\" (UID: \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.397250 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58-tmpfs\") pod \"olm-operator-5cdf44d969-b8zmj\" (UID: \"f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.397278 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d171fe05-fe49-46fb-9407-bdc1f9272d4b-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-z9d5g\" (UID: \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.397403 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zf76p\" (UniqueName: \"kubernetes.io/projected/d171fe05-fe49-46fb-9407-bdc1f9272d4b-kube-api-access-zf76p\") pod \"marketplace-operator-547dbd544d-z9d5g\" (UID: \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.397429 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dh7f\" (UniqueName: \"kubernetes.io/projected/f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58-kube-api-access-9dh7f\") pod \"olm-operator-5cdf44d969-b8zmj\" (UID: \"f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.397457 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58-profile-collector-cert\") pod \"olm-operator-5cdf44d969-b8zmj\" (UID: \"f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.397987 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d171fe05-fe49-46fb-9407-bdc1f9272d4b-tmp\") pod \"marketplace-operator-547dbd544d-z9d5g\" (UID: \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"
Dec 09 14:13:53 crc kubenswrapper[5173]: E1209 14:13:53.399522 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:53.899505199 +0000 UTC m=+116.824787446 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.401467 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d171fe05-fe49-46fb-9407-bdc1f9272d4b-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-z9d5g\" (UID: \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.410463 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d171fe05-fe49-46fb-9407-bdc1f9272d4b-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-z9d5g\" (UID: \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.418603 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf76p\" (UniqueName: \"kubernetes.io/projected/d171fe05-fe49-46fb-9407-bdc1f9272d4b-kube-api-access-zf76p\") pod \"marketplace-operator-547dbd544d-z9d5g\" (UID: \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.420166 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.471848 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8" event={"ID":"683a6416-7033-4896-9e1e-be8b31f74d38","Type":"ContainerStarted","Data":"e89ee0f2cfb224eab99158c5585208668497f7fab76b50e5a1851b51bb1993f7"}
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.471916 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-wznw7"]
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.473120 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-s7fzg"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.476171 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.501772 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:53 crc kubenswrapper[5173]: E1209 14:13:53.501987 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:54.001949508 +0000 UTC m=+116.927231885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.502122 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9dh7f\" (UniqueName: \"kubernetes.io/projected/f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58-kube-api-access-9dh7f\") pod \"olm-operator-5cdf44d969-b8zmj\" (UID: \"f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.502200 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58-profile-collector-cert\") pod \"olm-operator-5cdf44d969-b8zmj\" (UID: \"f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.502369 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.502419 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58-srv-cert\") pod \"olm-operator-5cdf44d969-b8zmj\" (UID: \"f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.502501 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58-tmpfs\") pod \"olm-operator-5cdf44d969-b8zmj\" (UID: \"f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj"
Dec 09 14:13:53 crc kubenswrapper[5173]: E1209 14:13:53.502854 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:54.002837856 +0000 UTC m=+116.928120103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.506255 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58-tmpfs\") pod \"olm-operator-5cdf44d969-b8zmj\" (UID: \"f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.524342 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.542513 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.558212 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-rbkgm"]
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.559169 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-wznw7"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.563544 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.564024 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.570072 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.570801 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.571006 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.578529 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58-profile-collector-cert\") pod \"olm-operator-5cdf44d969-b8zmj\" (UID: \"f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.584415 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dh7f\" (UniqueName: \"kubernetes.io/projected/f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58-kube-api-access-9dh7f\") pod \"olm-operator-5cdf44d969-b8zmj\" (UID: \"f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj"
Dec 09 14:13:53 crc kubenswrapper[5173]: W1209 14:13:53.584999 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff9b667_97da_48d5_85b6_7c02806cc6c6.slice/crio-8fc5a5912c6c444cc67800d7b901140b080a655d2a0c24274febaf9a7669c412 WatchSource:0}: Error finding container 8fc5a5912c6c444cc67800d7b901140b080a655d2a0c24274febaf9a7669c412: Status 404 returned error can't find the container with id 8fc5a5912c6c444cc67800d7b901140b080a655d2a0c24274febaf9a7669c412
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.592230 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58-srv-cert\") pod \"olm-operator-5cdf44d969-b8zmj\" (UID: \"f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.609620 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:53 crc kubenswrapper[5173]: E1209 14:13:53.609754 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:54.109723922 +0000 UTC m=+117.035006169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.610469 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn9jn\" (UniqueName: \"kubernetes.io/projected/84c3a797-e34a-463b-b598-7b75849c651b-kube-api-access-zn9jn\") pod \"package-server-manager-77f986bd66-s7fzg\" (UID: \"84c3a797-e34a-463b-b598-7b75849c651b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-s7fzg"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.610510 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2105b69-54d7-4854-ba11-9108ad09016d-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-wznw7\" (UID: \"d2105b69-54d7-4854-ba11-9108ad09016d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-wznw7"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.610648 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfwhg\" (UniqueName: \"kubernetes.io/projected/d2105b69-54d7-4854-ba11-9108ad09016d-kube-api-access-xfwhg\") pod \"kube-storage-version-migrator-operator-565b79b866-wznw7\" (UID: \"d2105b69-54d7-4854-ba11-9108ad09016d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-wznw7"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.610745 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.610896 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2105b69-54d7-4854-ba11-9108ad09016d-config\") pod \"kube-storage-version-migrator-operator-565b79b866-wznw7\" (UID: \"d2105b69-54d7-4854-ba11-9108ad09016d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-wznw7"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.610953 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/84c3a797-e34a-463b-b598-7b75849c651b-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-s7fzg\" (UID: \"84c3a797-e34a-463b-b598-7b75849c651b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-s7fzg"
Dec 09 14:13:53 crc kubenswrapper[5173]: E1209 14:13:53.611531 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:54.111510928 +0000 UTC m=+117.036793175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.613738 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4"]
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.615913 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rbkgm"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.625415 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.625621 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.625727 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.626307 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.626577 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.649207 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.682441 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-q5kgl" event={"ID":"a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b","Type":"ContainerStarted","Data":"6e8d3ac68b0b35db350dc2a67a5c5e6cc6f51ca46b29378d729de84661d3f9af"}
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.682530 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p" event={"ID":"a58f4e37-afc4-442b-b93e-87303f0dbdb6","Type":"ContainerStarted","Data":"6905359cfb45e89d2fa18e08cddccae68cdb0127f97eedc4adc6d65e8cb51135"}
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.682547 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-j76wj"]
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.683054 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.687469 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.687543 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.687641 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.687681 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.706961 5173 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-54cg5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.707033 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5" podUID="4751d5a1-9958-4f4f-aa73-a94b587a09b7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.708594 5173 patch_prober.go:28] interesting pod/downloads-747b44746d-zhlr7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.708640 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-zhlr7" podUID="6794662c-7933-4e08-870f-c44892aef039" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.714197 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.714413 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zn9jn\" (UniqueName: \"kubernetes.io/projected/84c3a797-e34a-463b-b598-7b75849c651b-kube-api-access-zn9jn\") pod \"package-server-manager-77f986bd66-s7fzg\" (UID: \"84c3a797-e34a-463b-b598-7b75849c651b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-s7fzg"
Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.714446 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/d2105b69-54d7-4854-ba11-9108ad09016d-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-wznw7\" (UID: \"d2105b69-54d7-4854-ba11-9108ad09016d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-wznw7" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.714493 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xfwhg\" (UniqueName: \"kubernetes.io/projected/d2105b69-54d7-4854-ba11-9108ad09016d-kube-api-access-xfwhg\") pod \"kube-storage-version-migrator-operator-565b79b866-wznw7\" (UID: \"d2105b69-54d7-4854-ba11-9108ad09016d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-wznw7" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.714529 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2105b69-54d7-4854-ba11-9108ad09016d-config\") pod \"kube-storage-version-migrator-operator-565b79b866-wznw7\" (UID: \"d2105b69-54d7-4854-ba11-9108ad09016d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-wznw7" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.714559 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/84c3a797-e34a-463b-b598-7b75849c651b-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-s7fzg\" (UID: \"84c3a797-e34a-463b-b598-7b75849c651b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-s7fzg" Dec 09 14:13:53 crc kubenswrapper[5173]: E1209 14:13:53.715758 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:54.215732911 +0000 UTC m=+117.141015188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.726829 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2105b69-54d7-4854-ba11-9108ad09016d-config\") pod \"kube-storage-version-migrator-operator-565b79b866-wznw7\" (UID: \"d2105b69-54d7-4854-ba11-9108ad09016d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-wznw7" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.739292 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h" event={"ID":"0917873e-8059-49a3-aec4-f2b5152fc356","Type":"ContainerDied","Data":"5ddde056f3ac35b0c8b66cffd71d82c7aeecd2d0cb97f242816bdfc0e8a1fee7"} Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.739498 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-j76wj" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.739524 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7r65t"] Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.743089 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2105b69-54d7-4854-ba11-9108ad09016d-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-wznw7\" (UID: \"d2105b69-54d7-4854-ba11-9108ad09016d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-wznw7" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.744587 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.744595 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.744849 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.745095 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.745217 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 09 14:13:53 crc kubenswrapper[5173]: W1209 14:13:53.750215 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99788f65_7403_4cb0_91bb_f318172f7171.slice/crio-e831bcbf76390653ae9caec8ce53731e4840dc1bf17cc1f843468a82036330ee WatchSource:0}: Error finding container e831bcbf76390653ae9caec8ce53731e4840dc1bf17cc1f843468a82036330ee: Status 404 returned error can't find the container with id e831bcbf76390653ae9caec8ce53731e4840dc1bf17cc1f843468a82036330ee Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.751898 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfwhg\" (UniqueName: \"kubernetes.io/projected/d2105b69-54d7-4854-ba11-9108ad09016d-kube-api-access-xfwhg\") pod \"kube-storage-version-migrator-operator-565b79b866-wznw7\" (UID: \"d2105b69-54d7-4854-ba11-9108ad09016d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-wznw7" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.762382 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn9jn\" (UniqueName: \"kubernetes.io/projected/84c3a797-e34a-463b-b598-7b75849c651b-kube-api-access-zn9jn\") pod \"package-server-manager-77f986bd66-s7fzg\" (UID: \"84c3a797-e34a-463b-b598-7b75849c651b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-s7fzg" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.764653 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/84c3a797-e34a-463b-b598-7b75849c651b-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-s7fzg\" (UID: 
\"84c3a797-e34a-463b-b598-7b75849c651b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-s7fzg" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.805646 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-s7fzg" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.815392 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9f157b4-58c8-4daf-81bc-87cd621d3d55-config\") pod \"service-ca-operator-5b9c976747-rbkgm\" (UID: \"f9f157b4-58c8-4daf-81bc-87cd621d3d55\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rbkgm" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.815439 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.815506 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b32aad18-fc40-4128-96a6-b4d1b3de9cb5-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-5fld4\" (UID: \"b32aad18-fc40-4128-96a6-b4d1b3de9cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.815549 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b32aad18-fc40-4128-96a6-b4d1b3de9cb5-config\") pod \"openshift-kube-scheduler-operator-54f497555d-5fld4\" (UID: \"b32aad18-fc40-4128-96a6-b4d1b3de9cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.815588 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b32aad18-fc40-4128-96a6-b4d1b3de9cb5-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-5fld4\" (UID: \"b32aad18-fc40-4128-96a6-b4d1b3de9cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4" Dec 09 14:13:53 crc kubenswrapper[5173]: E1209 14:13:53.817112 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:54.317089606 +0000 UTC m=+117.242371903 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.818026 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b32aad18-fc40-4128-96a6-b4d1b3de9cb5-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-5fld4\" (UID: \"b32aad18-fc40-4128-96a6-b4d1b3de9cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.818111 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnkwg\" (UniqueName: \"kubernetes.io/projected/f9f157b4-58c8-4daf-81bc-87cd621d3d55-kube-api-access-vnkwg\") pod \"service-ca-operator-5b9c976747-rbkgm\" (UID: \"f9f157b4-58c8-4daf-81bc-87cd621d3d55\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rbkgm" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.818199 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9f157b4-58c8-4daf-81bc-87cd621d3d55-serving-cert\") pod \"service-ca-operator-5b9c976747-rbkgm\" (UID: \"f9f157b4-58c8-4daf-81bc-87cd621d3d55\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rbkgm" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.835043 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc"] Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.835428 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7r65t" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.839546 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.839791 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.905822 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-zmsp9"] Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.906253 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.919414 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.920442 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.920758 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9f157b4-58c8-4daf-81bc-87cd621d3d55-config\") pod \"service-ca-operator-5b9c976747-rbkgm\" (UID: \"f9f157b4-58c8-4daf-81bc-87cd621d3d55\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rbkgm" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.920815 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ab63887-5fdd-419d-a3cc-af0c227e114a-signing-key\") pod \"service-ca-74545575db-j76wj\" (UID: \"7ab63887-5fdd-419d-a3cc-af0c227e114a\") " pod="openshift-service-ca/service-ca-74545575db-j76wj" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.920864 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b32aad18-fc40-4128-96a6-b4d1b3de9cb5-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-5fld4\" (UID: \"b32aad18-fc40-4128-96a6-b4d1b3de9cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.920893 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmxql\" (UniqueName: \"kubernetes.io/projected/7ab63887-5fdd-419d-a3cc-af0c227e114a-kube-api-access-kmxql\") pod \"service-ca-74545575db-j76wj\" (UID: \"7ab63887-5fdd-419d-a3cc-af0c227e114a\") " pod="openshift-service-ca/service-ca-74545575db-j76wj" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.921756 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9f157b4-58c8-4daf-81bc-87cd621d3d55-config\") pod \"service-ca-operator-5b9c976747-rbkgm\" (UID: \"f9f157b4-58c8-4daf-81bc-87cd621d3d55\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rbkgm" Dec 09 14:13:53 crc kubenswrapper[5173]: E1209 14:13:53.921868 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:54.421842167 +0000 UTC m=+117.347124414 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.922835 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b32aad18-fc40-4128-96a6-b4d1b3de9cb5-config\") pod \"openshift-kube-scheduler-operator-54f497555d-5fld4\" (UID: \"b32aad18-fc40-4128-96a6-b4d1b3de9cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.922899 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b32aad18-fc40-4128-96a6-b4d1b3de9cb5-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-5fld4\" (UID: \"b32aad18-fc40-4128-96a6-b4d1b3de9cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.923014 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b32aad18-fc40-4128-96a6-b4d1b3de9cb5-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-5fld4\" (UID: \"b32aad18-fc40-4128-96a6-b4d1b3de9cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.923075 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7ab63887-5fdd-419d-a3cc-af0c227e114a-signing-cabundle\") pod \"service-ca-74545575db-j76wj\" (UID: \"7ab63887-5fdd-419d-a3cc-af0c227e114a\") " pod="openshift-service-ca/service-ca-74545575db-j76wj" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.923149 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vnkwg\" (UniqueName: \"kubernetes.io/projected/f9f157b4-58c8-4daf-81bc-87cd621d3d55-kube-api-access-vnkwg\") pod \"service-ca-operator-5b9c976747-rbkgm\" (UID: \"f9f157b4-58c8-4daf-81bc-87cd621d3d55\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rbkgm" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.923252 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9f157b4-58c8-4daf-81bc-87cd621d3d55-serving-cert\") pod \"service-ca-operator-5b9c976747-rbkgm\" (UID: \"f9f157b4-58c8-4daf-81bc-87cd621d3d55\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rbkgm" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.924242 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b32aad18-fc40-4128-96a6-b4d1b3de9cb5-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-5fld4\" (UID: \"b32aad18-fc40-4128-96a6-b4d1b3de9cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.925050 
5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b32aad18-fc40-4128-96a6-b4d1b3de9cb5-config\") pod \"openshift-kube-scheduler-operator-54f497555d-5fld4\" (UID: \"b32aad18-fc40-4128-96a6-b4d1b3de9cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.928577 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-wznw7" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.946097 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b32aad18-fc40-4128-96a6-b4d1b3de9cb5-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-5fld4\" (UID: \"b32aad18-fc40-4128-96a6-b4d1b3de9cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.952784 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9f157b4-58c8-4daf-81bc-87cd621d3d55-serving-cert\") pod \"service-ca-operator-5b9c976747-rbkgm\" (UID: \"f9f157b4-58c8-4daf-81bc-87cd621d3d55\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rbkgm" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.961386 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b32aad18-fc40-4128-96a6-b4d1b3de9cb5-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-5fld4\" (UID: \"b32aad18-fc40-4128-96a6-b4d1b3de9cb5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.962399 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnkwg\" (UniqueName: \"kubernetes.io/projected/f9f157b4-58c8-4daf-81bc-87cd621d3d55-kube-api-access-vnkwg\") pod \"service-ca-operator-5b9c976747-rbkgm\" (UID: \"f9f157b4-58c8-4daf-81bc-87cd621d3d55\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rbkgm" Dec 09 14:13:53 crc kubenswrapper[5173]: I1209 14:13:53.979329 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rbkgm" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.025247 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7r65t\" (UID: \"f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7r65t" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.025795 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b4546d4c-2456-4baf-98ca-9a1ca067bb14-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-sgppc\" (UID: \"b4546d4c-2456-4baf-98ca-9a1ca067bb14\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.025866 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b4546d4c-2456-4baf-98ca-9a1ca067bb14-srv-cert\") pod \"catalog-operator-75ff9f647d-sgppc\" (UID: \"b4546d4c-2456-4baf-98ca-9a1ca067bb14\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.025945 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.025978 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ab63887-5fdd-419d-a3cc-af0c227e114a-signing-key\") pod \"service-ca-74545575db-j76wj\" (UID: \"7ab63887-5fdd-419d-a3cc-af0c227e114a\") " pod="openshift-service-ca/service-ca-74545575db-j76wj" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.026031 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kmxql\" (UniqueName: \"kubernetes.io/projected/7ab63887-5fdd-419d-a3cc-af0c227e114a-kube-api-access-kmxql\") pod \"service-ca-74545575db-j76wj\" (UID: \"7ab63887-5fdd-419d-a3cc-af0c227e114a\") " pod="openshift-service-ca/service-ca-74545575db-j76wj" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.026056 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b4546d4c-2456-4baf-98ca-9a1ca067bb14-tmpfs\") pod \"catalog-operator-75ff9f647d-sgppc\" (UID: \"b4546d4c-2456-4baf-98ca-9a1ca067bb14\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.026121 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-7r65t\" (UID: \"f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966\") " 
pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7r65t" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.026160 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjc4l\" (UniqueName: \"kubernetes.io/projected/f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966-kube-api-access-tjc4l\") pod \"machine-config-controller-f9cdd68f7-7r65t\" (UID: \"f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7r65t" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.026178 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nsqb\" (UniqueName: \"kubernetes.io/projected/b4546d4c-2456-4baf-98ca-9a1ca067bb14-kube-api-access-7nsqb\") pod \"catalog-operator-75ff9f647d-sgppc\" (UID: \"b4546d4c-2456-4baf-98ca-9a1ca067bb14\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.026253 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7ab63887-5fdd-419d-a3cc-af0c227e114a-signing-cabundle\") pod \"service-ca-74545575db-j76wj\" (UID: \"7ab63887-5fdd-419d-a3cc-af0c227e114a\") " pod="openshift-service-ca/service-ca-74545575db-j76wj" Dec 09 14:13:54 crc kubenswrapper[5173]: E1209 14:13:54.027110 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:54.527084232 +0000 UTC m=+117.452366519 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.028071 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7ab63887-5fdd-419d-a3cc-af0c227e114a-signing-cabundle\") pod \"service-ca-74545575db-j76wj\" (UID: \"7ab63887-5fdd-419d-a3cc-af0c227e114a\") " pod="openshift-service-ca/service-ca-74545575db-j76wj" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.032954 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ab63887-5fdd-419d-a3cc-af0c227e114a-signing-key\") pod \"service-ca-74545575db-j76wj\" (UID: \"7ab63887-5fdd-419d-a3cc-af0c227e114a\") " pod="openshift-service-ca/service-ca-74545575db-j76wj" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.037073 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.048913 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmxql\" (UniqueName: \"kubernetes.io/projected/7ab63887-5fdd-419d-a3cc-af0c227e114a-kube-api-access-kmxql\") pod \"service-ca-74545575db-j76wj\" (UID: \"7ab63887-5fdd-419d-a3cc-af0c227e114a\") " pod="openshift-service-ca/service-ca-74545575db-j76wj" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.062974 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-j76wj" Dec 09 14:13:54 crc kubenswrapper[5173]: W1209 14:13:54.125651 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff6c1ec3_b9f2_4b18_ad51_a8e943ae96e7.slice/crio-1c9524c9b940ad7cf068ad79987ddc2dba02051f4bd6d1eb5b9b265768773475 WatchSource:0}: Error finding container 1c9524c9b940ad7cf068ad79987ddc2dba02051f4bd6d1eb5b9b265768773475: Status 404 returned error can't find the container with id 1c9524c9b940ad7cf068ad79987ddc2dba02051f4bd6d1eb5b9b265768773475 Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.127271 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.127481 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b4546d4c-2456-4baf-98ca-9a1ca067bb14-tmpfs\") pod \"catalog-operator-75ff9f647d-sgppc\" (UID: \"b4546d4c-2456-4baf-98ca-9a1ca067bb14\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" Dec 09 14:13:54 crc kubenswrapper[5173]: E1209 14:13:54.127646 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:54.627574109 +0000 UTC m=+117.552856386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.127725 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-7r65t\" (UID: \"f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7r65t" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.127770 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tjc4l\" (UniqueName: \"kubernetes.io/projected/f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966-kube-api-access-tjc4l\") pod \"machine-config-controller-f9cdd68f7-7r65t\" (UID: \"f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7r65t" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.127828 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7nsqb\" (UniqueName: \"kubernetes.io/projected/b4546d4c-2456-4baf-98ca-9a1ca067bb14-kube-api-access-7nsqb\") pod \"catalog-operator-75ff9f647d-sgppc\" (UID: \"b4546d4c-2456-4baf-98ca-9a1ca067bb14\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.127910 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b4546d4c-2456-4baf-98ca-9a1ca067bb14-tmpfs\") pod \"catalog-operator-75ff9f647d-sgppc\" (UID: \"b4546d4c-2456-4baf-98ca-9a1ca067bb14\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.128271 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7r65t\" (UID: \"f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7r65t" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.128310 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b4546d4c-2456-4baf-98ca-9a1ca067bb14-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-sgppc\" (UID: \"b4546d4c-2456-4baf-98ca-9a1ca067bb14\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.128328 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b4546d4c-2456-4baf-98ca-9a1ca067bb14-srv-cert\") pod \"catalog-operator-75ff9f647d-sgppc\" (UID: \"b4546d4c-2456-4baf-98ca-9a1ca067bb14\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.139079 5173 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b4546d4c-2456-4baf-98ca-9a1ca067bb14-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-sgppc\" (UID: \"b4546d4c-2456-4baf-98ca-9a1ca067bb14\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.148800 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b4546d4c-2456-4baf-98ca-9a1ca067bb14-srv-cert\") pod \"catalog-operator-75ff9f647d-sgppc\" (UID: \"b4546d4c-2456-4baf-98ca-9a1ca067bb14\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.154157 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nsqb\" (UniqueName: \"kubernetes.io/projected/b4546d4c-2456-4baf-98ca-9a1ca067bb14-kube-api-access-7nsqb\") pod \"catalog-operator-75ff9f647d-sgppc\" (UID: \"b4546d4c-2456-4baf-98ca-9a1ca067bb14\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.189079 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-7r65t\" (UID: \"f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7r65t" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.207197 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7r65t\" (UID: \"f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7r65t" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.207423 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjc4l\" (UniqueName: \"kubernetes.io/projected/f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966-kube-api-access-tjc4l\") pod \"machine-config-controller-f9cdd68f7-7r65t\" (UID: \"f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7r65t" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.230799 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:54 crc kubenswrapper[5173]: E1209 14:13:54.231197 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:54.731182004 +0000 UTC m=+117.656464251 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.241727 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.331645 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:54 crc kubenswrapper[5173]: E1209 14:13:54.331813 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:54.831779115 +0000 UTC m=+117.757061362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.332436 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:54 crc kubenswrapper[5173]: E1209 14:13:54.332822 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:54.832809007 +0000 UTC m=+117.758091254 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.443958 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:54 crc kubenswrapper[5173]: E1209 14:13:54.444301 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:54.944285197 +0000 UTC m=+117.869567444 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:54 crc kubenswrapper[5173]: W1209 14:13:54.446724 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9f157b4_58c8_4daf_81bc_87cd621d3d55.slice/crio-6cbdba6bdde635827bc6a47974918dfcccd7013f424dfd4d808fb18de6b1a95f WatchSource:0}: Error finding container 6cbdba6bdde635827bc6a47974918dfcccd7013f424dfd4d808fb18de6b1a95f: Status 404 returned error can't find the container with id 6cbdba6bdde635827bc6a47974918dfcccd7013f424dfd4d808fb18de6b1a95f Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.481972 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7r65t" Dec 09 14:13:54 crc kubenswrapper[5173]: W1209 14:13:54.522125 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb32aad18_fc40_4128_96a6_b4d1b3de9cb5.slice/crio-f6fd9b35214429490fa64a50a5882e57d9c7891446179072e9ce9f1203ce3b58 WatchSource:0}: Error finding container f6fd9b35214429490fa64a50a5882e57d9c7891446179072e9ce9f1203ce3b58: Status 404 returned error can't find the container with id f6fd9b35214429490fa64a50a5882e57d9c7891446179072e9ce9f1203ce3b58 Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.546497 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:54 crc kubenswrapper[5173]: E1209 14:13:54.546925 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:55.046909781 +0000 UTC m=+117.972192028 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:54 crc kubenswrapper[5173]: W1209 14:13:54.583938 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4546d4c_2456_4baf_98ca_9a1ca067bb14.slice/crio-29d1a40a31f92b258afa71b9845cc4f79a261fce923552071357c3b82b952b69 WatchSource:0}: Error finding container 29d1a40a31f92b258afa71b9845cc4f79a261fce923552071357c3b82b952b69: Status 404 returned error can't find the container with id 29d1a40a31f92b258afa71b9845cc4f79a261fce923552071357c3b82b952b69 Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.647111 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:54 crc kubenswrapper[5173]: E1209 14:13:54.647413 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:55.147379648 +0000 UTC m=+118.072661905 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.678880 5173 generic.go:358] "Generic (PLEG): container finished" podID="b2ab9ef6-9c83-482d-9ea5-148c66ca62bd" containerID="f0034df578b491613f87ced6b2dc4e1b508875937532cf9345590b4acb6e09ed" exitCode=0 Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.749088 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:54 crc kubenswrapper[5173]: E1209 14:13:54.749212 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:55.249187306 +0000 UTC m=+118.174469553 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.850069 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:54 crc kubenswrapper[5173]: E1209 14:13:54.850224 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:55.35019588 +0000 UTC m=+118.275478127 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.850388 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:54 crc kubenswrapper[5173]: E1209 14:13:54.850737 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:55.350724157 +0000 UTC m=+118.276006404 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:54 crc kubenswrapper[5173]: W1209 14:13:54.883175 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3fe3c75_5f1d_47f2_9b85_57e0ecbf8966.slice/crio-553b94f99bc23256026ebd70d9fcfd7ea034d98075afb5c2ec933a3ae23ca3bb WatchSource:0}: Error finding container 553b94f99bc23256026ebd70d9fcfd7ea034d98075afb5c2ec933a3ae23ca3bb: Status 404 returned error can't find the container with id 553b94f99bc23256026ebd70d9fcfd7ea034d98075afb5c2ec933a3ae23ca3bb Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.951458 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:54 crc kubenswrapper[5173]: E1209 14:13:54.951689 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:55.451668608 +0000 UTC m=+118.376950855 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:54 crc kubenswrapper[5173]: I1209 14:13:54.951769 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:54 crc kubenswrapper[5173]: E1209 14:13:54.952125 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:55.452107992 +0000 UTC m=+118.377390229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.059022 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:55 crc kubenswrapper[5173]: E1209 14:13:55.059499 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:55.559471274 +0000 UTC m=+118.484753521 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.144690 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-tpkl8"] Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.146016 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-zmsp9" Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.148767 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.151774 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.152466 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.154729 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-znppb" event={"ID":"fb78d03e-40d5-4c32-9f47-49a596f9b55a","Type":"ContainerStarted","Data":"ed0943e30ab4b6e6898b10aa75d98d22b0e41b3d4c9b898d197df03a1889e490"} Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.154791 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" event={"ID":"19dbfec3-c944-4ab4-9b21-a1ac67840543","Type":"ContainerStarted","Data":"237431e2a6df03a4fca101b2afc5bbc733f878aeebe3349b23611782303395fc"} Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.154811 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b2v58"] Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.154828 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5" event={"ID":"4751d5a1-9958-4f4f-aa73-a94b587a09b7","Type":"ContainerStarted","Data":"1dba72aa6716362ddf3006bbd7ef572748927c7f075282fff51a5c6b6e1233b5"} Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.154842 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv" event={"ID":"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd","Type":"ContainerStarted","Data":"45470d1b2ca947d28bbf1ae10ad736024100f79218e1d2def4be924e11797f04"} Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.154853 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9" event={"ID":"eb0c4171-4c7a-4d9c-a467-47895e7dca09","Type":"ContainerStarted","Data":"5b770dfb9be2208de122cff4db90e06f15dd38bb2c7277992cb6fcf23e8dc7c0"} Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.154866 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k"] Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.160877 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.160945 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzwh7\" (UniqueName: 
\"kubernetes.io/projected/f2962534-956e-497a-89af-1b5d39a61c84-kube-api-access-hzwh7\") pod \"migrator-866fcbc849-zmsp9\" (UID: \"f2962534-956e-497a-89af-1b5d39a61c84\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-zmsp9" Dec 09 14:13:55 crc kubenswrapper[5173]: E1209 14:13:55.161228 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:55.66120955 +0000 UTC m=+118.586491797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.261644 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:55 crc kubenswrapper[5173]: E1209 14:13:55.261865 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:55.761845742 +0000 UTC m=+118.687127979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.261934 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hzwh7\" (UniqueName: \"kubernetes.io/projected/f2962534-956e-497a-89af-1b5d39a61c84-kube-api-access-hzwh7\") pod \"migrator-866fcbc849-zmsp9\" (UID: \"f2962534-956e-497a-89af-1b5d39a61c84\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-zmsp9" Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.261999 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:55 crc kubenswrapper[5173]: E1209 14:13:55.262295 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:55.762287986 +0000 UTC m=+118.687570233 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.363263 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:55 crc kubenswrapper[5173]: E1209 14:13:55.363545 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:55.863499166 +0000 UTC m=+118.788781413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.363736 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:55 crc kubenswrapper[5173]: E1209 14:13:55.364107 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:55.864085165 +0000 UTC m=+118.789367412 (durationBeforeRetry 500ms). 
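[annotation] The same PVC is moving in two directions at once here: UnmountVolume for the old pod UID 9e9b5059-... and MountVolume for the replacement image-registry pod 3f277bd6-..., i.e. the registry pod was rescheduled and the volume manager is reconciling desired state (mounted for the new pod) against actual state (still held by the dead one). Both directions need a CSI client, so both keep failing. A toy reconciliation pass showing that diff; the types are illustrative, not kubelet's:

package main

import "fmt"

// reconcile diffs desired pod->volume bindings against actual mounts
// and emits the two work lists this log shows interleaved:
// UnmountVolume for stale bindings, MountVolume for missing ones.
func reconcile(desired, actual map[string]string) (unmounts, mounts []string) {
	for pod, vol := range actual {
		if desired[pod] != vol {
			unmounts = append(unmounts, fmt.Sprintf("UnmountVolume %s for pod %s", vol, pod))
		}
	}
	for pod, vol := range desired {
		if actual[pod] != vol {
			mounts = append(mounts, fmt.Sprintf("MountVolume %s for pod %s", vol, pod))
		}
	}
	return
}

func main() {
	pvc := "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2"
	actual := map[string]string{"9e9b5059-1b3e-4067-a63d-2952cbe863af": pvc}  // old registry pod
	desired := map[string]string{"3f277bd6-ea48-4729-960f-5a2b97bbfecc": pvc} // replacement pod
	un, mo := reconcile(desired, actual)
	fmt.Println(un, mo)
}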
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.411303 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzwh7\" (UniqueName: \"kubernetes.io/projected/f2962534-956e-497a-89af-1b5d39a61c84-kube-api-access-hzwh7\") pod \"migrator-866fcbc849-zmsp9\" (UID: \"f2962534-956e-497a-89af-1b5d39a61c84\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-zmsp9" Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.464601 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:55 crc kubenswrapper[5173]: E1209 14:13:55.464792 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:55.964763728 +0000 UTC m=+118.890045975 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.465024 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:55 crc kubenswrapper[5173]: E1209 14:13:55.465526 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:55.965520041 +0000 UTC m=+118.890802288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.467541 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-zmsp9" Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.566707 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:55 crc kubenswrapper[5173]: E1209 14:13:55.567397 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:56.067376252 +0000 UTC m=+118.992658499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.668641 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:55 crc kubenswrapper[5173]: E1209 14:13:55.669087 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:56.169065917 +0000 UTC m=+119.094348204 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:55 crc kubenswrapper[5173]: W1209 14:13:55.725112 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2962534_956e_497a_89af_1b5d39a61c84.slice/crio-550221c63bb8f28262a9b1db11e4ab04e6eca910c0226ce47a4cecee5094cf42 WatchSource:0}: Error finding container 550221c63bb8f28262a9b1db11e4ab04e6eca910c0226ce47a4cecee5094cf42: Status 404 returned error can't find the container with id 550221c63bb8f28262a9b1db11e4ab04e6eca910c0226ce47a4cecee5094cf42 Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.769918 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:55 crc kubenswrapper[5173]: E1209 14:13:55.770004 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:56.269985918 +0000 UTC m=+119.195268165 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.770200 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:55 crc kubenswrapper[5173]: E1209 14:13:55.770511 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:56.270503304 +0000 UTC m=+119.195785551 (durationBeforeRetry 500ms). 
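[annotation] The "Failed to process watch event ... Status 404 ... can't find the container" warnings above are a startup race, not a fault: a cgroup watch event for a freshly created crio-... container arrives before the container is visible to the stats manager, so the lookup 404s and the event is dropped. The harmless-handling pattern, sketched with hypothetical helper names:

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("status 404: container not found")

// lookupContainer stands in for the runtime query that raced with
// container creation in the log; it may legitimately return 404.
func lookupContainer(id string) error { return errNotFound }

// processWatchEvent treats a 404 as a benign race (the container came
// and went around the cgroup event) rather than a hard failure.
func processWatchEvent(id string) {
	if err := lookupContainer(id); errors.Is(err, errNotFound) {
		fmt.Printf("warning: dropping watch event for %s: %v\n", id, err)
		return // log and move on, as the kubelet's manager does
	}
	// otherwise the event would update the container's stats entry
}

func main() {
	processWatchEvent("550221c63bb8f28262a9b1db11e4ab04e6eca910c0226ce47a4cecee5094cf42")
}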
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.871476 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:55 crc kubenswrapper[5173]: E1209 14:13:55.871676 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:56.371651431 +0000 UTC m=+119.296933678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.871936 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:55 crc kubenswrapper[5173]: E1209 14:13:55.872254 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:56.37224517 +0000 UTC m=+119.297527417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:55 crc kubenswrapper[5173]: I1209 14:13:55.973173 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:55 crc kubenswrapper[5173]: E1209 14:13:55.973405 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:56.473341367 +0000 UTC m=+119.398623614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.075245 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:56 crc kubenswrapper[5173]: E1209 14:13:56.075680 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:56.575654512 +0000 UTC m=+119.500936759 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.176558 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:56 crc kubenswrapper[5173]: E1209 14:13:56.176839 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:56.676818289 +0000 UTC m=+119.602100536 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.183718 5173 patch_prober.go:28] interesting pod/downloads-747b44746d-zhlr7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.183794 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-zhlr7" podUID="6794662c-7933-4e08-870f-c44892aef039" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.278707 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:56 crc kubenswrapper[5173]: E1209 14:13:56.279109 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:56.779092953 +0000 UTC m=+119.704375200 (durationBeforeRetry 500ms). 
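[annotation] The downloads-747b44746d-zhlr7 readiness failures above are plain "connection refused": the kubelet dials the pod IP before download-server has bound :8080, counts the attempt as a probe failure, and keeps the pod out of service endpoints until a probe succeeds. Roughly equivalent HTTP check logic (a sketch; the 1s timeout matches the kubelet's default timeoutSeconds, everything else is illustrative):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP readiness check the way this log's prober
// reports it: any dial error or non-2xx/3xx status is a failure.
func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("probe failed: %w", err) // e.g. connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Against a not-yet-listening server this prints a connection-refused
	// error, matching the Readiness/Liveness failures in the log.
	fmt.Println(probe("http://10.217.0.10:8080/"))
}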
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.381709 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:56 crc kubenswrapper[5173]: E1209 14:13:56.381942 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:56.881920793 +0000 UTC m=+119.807203040 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.434477 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"05b4f16478ce91bb02ea1b7910b0d2f33ece3a4b65c9eaf5207c97e99922d239"} Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.434633 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.438963 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.439189 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.439739 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.444251 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jf4h7"] Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.483391 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/59e9e420-971a-4d09-80f7-1039326724b8-images\") pod \"machine-config-operator-67c9d58cbb-sk58k\" (UID: \"59e9e420-971a-4d09-80f7-1039326724b8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.483463 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/59e9e420-971a-4d09-80f7-1039326724b8-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-sk58k\" (UID: \"59e9e420-971a-4d09-80f7-1039326724b8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.483491 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5jpj\" (UniqueName: \"kubernetes.io/projected/59e9e420-971a-4d09-80f7-1039326724b8-kube-api-access-w5jpj\") pod \"machine-config-operator-67c9d58cbb-sk58k\" (UID: \"59e9e420-971a-4d09-80f7-1039326724b8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.483575 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.483601 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/59e9e420-971a-4d09-80f7-1039326724b8-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-sk58k\" (UID: \"59e9e420-971a-4d09-80f7-1039326724b8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k" Dec 09 14:13:56 crc kubenswrapper[5173]: E1209 14:13:56.484209 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-09 14:13:56.984193467 +0000 UTC m=+119.909475784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.584369 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.584570 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/59e9e420-971a-4d09-80f7-1039326724b8-images\") pod \"machine-config-operator-67c9d58cbb-sk58k\" (UID: \"59e9e420-971a-4d09-80f7-1039326724b8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.584612 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/59e9e420-971a-4d09-80f7-1039326724b8-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-sk58k\" (UID: \"59e9e420-971a-4d09-80f7-1039326724b8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.584630 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w5jpj\" (UniqueName: \"kubernetes.io/projected/59e9e420-971a-4d09-80f7-1039326724b8-kube-api-access-w5jpj\") pod \"machine-config-operator-67c9d58cbb-sk58k\" (UID: \"59e9e420-971a-4d09-80f7-1039326724b8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.584668 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/59e9e420-971a-4d09-80f7-1039326724b8-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-sk58k\" (UID: \"59e9e420-971a-4d09-80f7-1039326724b8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k" Dec 09 14:13:56 crc kubenswrapper[5173]: E1209 14:13:56.585083 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:57.085052076 +0000 UTC m=+120.010334323 (durationBeforeRetry 500ms). 
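[annotation] Contrast with the stuck CSI PVC: the machine-config-operator volumes above are configmap/, secret/ and projected/ types, served by in-tree plugins that only write files from API objects into the pod directory, so their MountVolume.SetUp succeeds immediately in the next entries. The dispatch is visible in the UniqueName prefixes; a simplified sketch of that routing (names taken from this log, switch logic illustrative):

package main

import (
	"fmt"
	"strings"
)

// pluginFor picks a volume plugin from the UniqueName prefix, e.g.
// "kubernetes.io/configmap/..." vs "kubernetes.io/csi/driver^volume".
func pluginFor(uniqueName string) string {
	switch {
	case strings.HasPrefix(uniqueName, "kubernetes.io/csi/"):
		return "csi (needs a registered driver client)"
	case strings.HasPrefix(uniqueName, "kubernetes.io/configmap/"),
		strings.HasPrefix(uniqueName, "kubernetes.io/secret/"),
		strings.HasPrefix(uniqueName, "kubernetes.io/projected/"):
		return "in-tree (writes files from the API object)"
	default:
		return "unknown"
	}
}

func main() {
	for _, v := range []string{
		"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2",
		"kubernetes.io/configmap/59e9e420-971a-4d09-80f7-1039326724b8-images",
		"kubernetes.io/secret/59e9e420-971a-4d09-80f7-1039326724b8-proxy-tls",
	} {
		fmt.Println(pluginFor(v))
	}
}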
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.585600 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/59e9e420-971a-4d09-80f7-1039326724b8-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-sk58k\" (UID: \"59e9e420-971a-4d09-80f7-1039326724b8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.585634 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/59e9e420-971a-4d09-80f7-1039326724b8-images\") pod \"machine-config-operator-67c9d58cbb-sk58k\" (UID: \"59e9e420-971a-4d09-80f7-1039326724b8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.600037 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/59e9e420-971a-4d09-80f7-1039326724b8-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-sk58k\" (UID: \"59e9e420-971a-4d09-80f7-1039326724b8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.614117 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5jpj\" (UniqueName: \"kubernetes.io/projected/59e9e420-971a-4d09-80f7-1039326724b8-kube-api-access-w5jpj\") pod \"machine-config-operator-67c9d58cbb-sk58k\" (UID: \"59e9e420-971a-4d09-80f7-1039326724b8\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.685968 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:56 crc kubenswrapper[5173]: E1209 14:13:56.686289 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:57.186272165 +0000 UTC m=+120.111554412 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.760666 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.786882 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:56 crc kubenswrapper[5173]: E1209 14:13:56.787287 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:57.287266169 +0000 UTC m=+120.212548416 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.863693 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-pkw8g"] Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.863930 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-df498" event={"ID":"d9234899-46cc-4f8d-bfe6-a65d9532ba16","Type":"ContainerStarted","Data":"771bdbde0ec55234aef846ae20d0e7f680982c3a16c51e05dc8c73f3d1f6ba99"} Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.863987 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk" event={"ID":"5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c","Type":"ContainerStarted","Data":"08536214d4007a00fe34969a2c88ec5d63d0c5b3bfbeb1cc295271a525c9265f"} Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.864002 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"bbb22c094946ed0b3f8f8324762677869d17fa3fe97d2e5be1e0278a546e402c"} Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.864020 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-54cg5"] Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.864074 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp"] Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.864409 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jf4h7" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.870499 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.870841 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.888881 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddtnx\" (UniqueName: \"kubernetes.io/projected/bd366c79-af35-434f-9179-c5ecf3974dd8-kube-api-access-ddtnx\") pod \"control-plane-machine-set-operator-75ffdb6fcd-jf4h7\" (UID: \"bd366c79-af35-434f-9179-c5ecf3974dd8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jf4h7" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.889001 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd366c79-af35-434f-9179-c5ecf3974dd8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-jf4h7\" (UID: \"bd366c79-af35-434f-9179-c5ecf3974dd8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jf4h7" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.889161 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:56 crc kubenswrapper[5173]: E1209 14:13:56.889638 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:57.389623325 +0000 UTC m=+120.314905642 (durationBeforeRetry 500ms). 
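[annotation] The reflector "Caches populated" lines record the kubelet's per-namespace informers (the ConfigMaps and Secrets a pod references) finishing their initial LIST; dependent work such as mounting control-plane-machine-set-operator-tls proceeds once the relevant cache has synced. A loose, self-contained analogy of that sync gate (cacheGate is illustrative, not client-go's machinery):

package main

import (
	"fmt"
	"sync"
)

// cacheGate blocks consumers until the initial LIST has populated the
// cache - the moment the log marks "Caches populated".
type cacheGate struct {
	once   sync.Once
	synced chan struct{}
	data   map[string][]byte
}

func newCacheGate() *cacheGate {
	return &cacheGate{synced: make(chan struct{}), data: map[string][]byte{}}
}

// populate is the reflector side: store the initial listing, then
// signal every waiter exactly once.
func (c *cacheGate) populate(objs map[string][]byte) {
	c.data = objs
	c.once.Do(func() { close(c.synced) })
}

// get is the consumer side (e.g. a secret volume mount): wait for the
// initial sync, then read from the local cache instead of the API.
func (c *cacheGate) get(key string) []byte {
	<-c.synced
	return c.data[key]
}

func main() {
	g := newCacheGate()
	go g.populate(map[string][]byte{"control-plane-machine-set-operator-tls": []byte("pem-bytes")})
	fmt.Printf("%s\n", g.get("control-plane-machine-set-operator-tls"))
}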
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.992863 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.993233 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ddtnx\" (UniqueName: \"kubernetes.io/projected/bd366c79-af35-434f-9179-c5ecf3974dd8-kube-api-access-ddtnx\") pod \"control-plane-machine-set-operator-75ffdb6fcd-jf4h7\" (UID: \"bd366c79-af35-434f-9179-c5ecf3974dd8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jf4h7" Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.993260 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd366c79-af35-434f-9179-c5ecf3974dd8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-jf4h7\" (UID: \"bd366c79-af35-434f-9179-c5ecf3974dd8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jf4h7" Dec 09 14:13:56 crc kubenswrapper[5173]: E1209 14:13:56.994656 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:57.494623013 +0000 UTC m=+120.419905260 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:56 crc kubenswrapper[5173]: I1209 14:13:56.998691 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd366c79-af35-434f-9179-c5ecf3974dd8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-jf4h7\" (UID: \"bd366c79-af35-434f-9179-c5ecf3974dd8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jf4h7" Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.036378 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddtnx\" (UniqueName: \"kubernetes.io/projected/bd366c79-af35-434f-9179-c5ecf3974dd8-kube-api-access-ddtnx\") pod \"control-plane-machine-set-operator-75ffdb6fcd-jf4h7\" (UID: \"bd366c79-af35-434f-9179-c5ecf3974dd8\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jf4h7" Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.094009 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:57 crc kubenswrapper[5173]: E1209 14:13:57.094468 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:57.59444648 +0000 UTC m=+120.519728737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.195608 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:57 crc kubenswrapper[5173]: E1209 14:13:57.195743 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:57.695725062 +0000 UTC m=+120.621007309 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.195978 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:57 crc kubenswrapper[5173]: E1209 14:13:57.196291 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:57.696284459 +0000 UTC m=+120.621566706 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.198283 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jf4h7" Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.277123 5173 patch_prober.go:28] interesting pod/downloads-747b44746d-zhlr7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.277431 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-zhlr7" podUID="6794662c-7933-4e08-870f-c44892aef039" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.297718 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:57 crc kubenswrapper[5173]: E1209 14:13:57.297955 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:57.797925773 +0000 UTC m=+120.723208020 (durationBeforeRetry 500ms). 
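[annotation] "No sandbox for pod can be found. Need to start a new one" (here for control-plane-machine-set-operator-75ffdb6fcd-jf4h7) is the sync loop noticing a pod with no live CRI sandbox, which is the normal first step for a new pod: the kubelet then asks the runtime to create one, and the crio-... IDs in the later PLEG events are that sandbox coming up. Control-flow sketch over a hypothetical runtime interface (the real call is CRI RunPodSandbox over gRPC to CRI-O):

package main

import "fmt"

// runtime abstracts the two calls this decision needs; the interface
// and fake below are illustrative only.
type runtime interface {
	sandboxFor(podUID string) (id string, ok bool)
	runPodSandbox(podUID string) (id string, err error)
}

type fakeRuntime struct{ sandboxes map[string]string }

func (r *fakeRuntime) sandboxFor(uid string) (string, bool) { id, ok := r.sandboxes[uid]; return id, ok }
func (r *fakeRuntime) runPodSandbox(uid string) (string, error) {
	id := "crio-" + uid[:8]
	r.sandboxes[uid] = id
	return id, nil
}

// ensureSandbox mirrors the log's decision: reuse an existing sandbox,
// otherwise start a new one before any containers can be created.
func ensureSandbox(rt runtime, podUID string) (string, error) {
	if id, ok := rt.sandboxFor(podUID); ok {
		return id, nil
	}
	fmt.Println(`"No sandbox for pod can be found. Need to start a new one" pod=` + podUID)
	return rt.runPodSandbox(podUID)
}

func main() {
	rt := &fakeRuntime{sandboxes: map[string]string{}}
	id, _ := ensureSandbox(rt, "bd366c79-af35-434f-9179-c5ecf3974dd8")
	fmt.Println("sandbox:", id)
}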
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.298931 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:57 crc kubenswrapper[5173]: E1209 14:13:57.299432 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:57.799415979 +0000 UTC m=+120.724698226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.348081 5173 patch_prober.go:28] interesting pod/console-64d44f6ddf-q5kgl container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.348170 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-q5kgl" podUID="a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.400526 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:57 crc kubenswrapper[5173]: E1209 14:13:57.400792 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:57.900768343 +0000 UTC m=+120.826050590 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:57 crc kubenswrapper[5173]: W1209 14:13:57.423699 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd366c79_af35_434f_9179_c5ecf3974dd8.slice/crio-b9465abb0951418f5044f68cc9ce91f91196974f5a08d7d6f9b12acb9f07dfc2 WatchSource:0}: Error finding container b9465abb0951418f5044f68cc9ce91f91196974f5a08d7d6f9b12acb9f07dfc2: Status 404 returned error can't find the container with id b9465abb0951418f5044f68cc9ce91f91196974f5a08d7d6f9b12acb9f07dfc2
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.505592 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:57 crc kubenswrapper[5173]: E1209 14:13:57.506037 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:58.00601958 +0000 UTC m=+120.931301827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.606992 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:57 crc kubenswrapper[5173]: E1209 14:13:57.607150 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:58.107123316 +0000 UTC m=+121.032405563 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.607267 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:57 crc kubenswrapper[5173]: E1209 14:13:57.607945 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:58.107931831 +0000 UTC m=+121.033214078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.608340 5173 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-pkw8g container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.608399 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g" podUID="76317343-bf5b-441f-ae79-e09f3d1188cd" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.639254 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-lh94q" event={"ID":"4d271279-fdf8-48d7-b1d8-1b05fee604d4","Type":"ContainerStarted","Data":"a5b0f34a4bee6af81cdaeea6335bc3b74b3bca8a15483c7386234675a7d551ee"}
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.639330 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.639367 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"]
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.639515 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.639744 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-57k5h"]
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.639899 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.639987 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-zhlr7"]
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.640060 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-q5kgl"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.640158 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-twcnj"]
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.640735 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9" podStartSLOduration=95.640720211 podStartE2EDuration="1m35.640720211s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:56.944818033 +0000 UTC m=+119.870100300" watchObservedRunningTime="2025-12-09 14:13:57.640720211 +0000 UTC m=+120.566002468"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.640824 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" podStartSLOduration=95.640817284 podStartE2EDuration="1m35.640817284s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:56.88626405 +0000 UTC m=+119.811546317" watchObservedRunningTime="2025-12-09 14:13:57.640817284 +0000 UTC m=+120.566099531"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.641593 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-znppb" podStartSLOduration=95.641585309 podStartE2EDuration="1m35.641585309s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:56.924530001 +0000 UTC m=+119.849812258" watchObservedRunningTime="2025-12-09 14:13:57.641585309 +0000 UTC m=+120.566867556"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.643581 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.643933 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.708038 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.708989 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9przc\" (UniqueName: \"kubernetes.io/projected/b00790a6-0331-44bd-9ddb-10d0598d5d74-kube-api-access-9przc\") pod \"collect-profiles-29421480-qssgp\" (UID: \"b00790a6-0331-44bd-9ddb-10d0598d5d74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.709048 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b00790a6-0331-44bd-9ddb-10d0598d5d74-config-volume\") pod \"collect-profiles-29421480-qssgp\" (UID: \"b00790a6-0331-44bd-9ddb-10d0598d5d74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.709195 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b00790a6-0331-44bd-9ddb-10d0598d5d74-secret-volume\") pod \"collect-profiles-29421480-qssgp\" (UID: \"b00790a6-0331-44bd-9ddb-10d0598d5d74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp"
Dec 09 14:13:57 crc kubenswrapper[5173]: E1209 14:13:57.710919 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:58.210890605 +0000 UTC m=+121.136172862 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.813514 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9przc\" (UniqueName: \"kubernetes.io/projected/b00790a6-0331-44bd-9ddb-10d0598d5d74-kube-api-access-9przc\") pod \"collect-profiles-29421480-qssgp\" (UID: \"b00790a6-0331-44bd-9ddb-10d0598d5d74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.813668 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b00790a6-0331-44bd-9ddb-10d0598d5d74-config-volume\") pod \"collect-profiles-29421480-qssgp\" (UID: \"b00790a6-0331-44bd-9ddb-10d0598d5d74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.813827 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.813985 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b00790a6-0331-44bd-9ddb-10d0598d5d74-secret-volume\") pod \"collect-profiles-29421480-qssgp\" (UID: \"b00790a6-0331-44bd-9ddb-10d0598d5d74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp"
Dec 09 14:13:57 crc kubenswrapper[5173]: E1209 14:13:57.814699 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:58.314681506 +0000 UTC m=+121.239963763 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.815665 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b00790a6-0331-44bd-9ddb-10d0598d5d74-config-volume\") pod \"collect-profiles-29421480-qssgp\" (UID: \"b00790a6-0331-44bd-9ddb-10d0598d5d74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.830408 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b00790a6-0331-44bd-9ddb-10d0598d5d74-secret-volume\") pod \"collect-profiles-29421480-qssgp\" (UID: \"b00790a6-0331-44bd-9ddb-10d0598d5d74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.830794 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.830902 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lbnx5" event={"ID":"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b","Type":"ContainerStarted","Data":"254f0a8adbdb0499b1631141d51439e208418f8c788f91cdf5f31012962cb8c9"}
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.831012 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"a646111d188c4765a5a232c88abfd0c1261f8b54558bdb02ca5746bee8dc0b90"}
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.834848 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"]
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.835244 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8" event={"ID":"683a6416-7033-4896-9e1e-be8b31f74d38","Type":"ContainerStarted","Data":"d89939e79a43b454da152cad671cb97056ea6229a825405971e8ca6bd0e96933"}
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.835396 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.835482 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-q5kgl"]
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.835563 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z"]
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.835637 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"65f65f957a733b56b0d573e9d3c82a88852af721c4b3fb4d01d14b8f06f433a9"}
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.835731 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-4nhj5"]
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.831077 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.839206 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk" podStartSLOduration=96.839179439 podStartE2EDuration="1m36.839179439s" podCreationTimestamp="2025-12-09 14:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:57.696679203 +0000 UTC m=+120.621961450" watchObservedRunningTime="2025-12-09 14:13:57.839179439 +0000 UTC m=+120.764461686"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.839449 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-xtwzt" podStartSLOduration=96.839444467 podStartE2EDuration="1m36.839444467s" podCreationTimestamp="2025-12-09 14:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:57.773295658 +0000 UTC m=+120.698577925" watchObservedRunningTime="2025-12-09 14:13:57.839444467 +0000 UTC m=+120.764726714"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.840017 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-df498" podStartSLOduration=95.840013164 podStartE2EDuration="1m35.840013164s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:57.749160007 +0000 UTC m=+120.674442274" watchObservedRunningTime="2025-12-09 14:13:57.840013164 +0000 UTC m=+120.765295411"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.853019 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.870938 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.872154 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9przc\" (UniqueName: \"kubernetes.io/projected/b00790a6-0331-44bd-9ddb-10d0598d5d74-kube-api-access-9przc\") pod \"collect-profiles-29421480-qssgp\" (UID: \"b00790a6-0331-44bd-9ddb-10d0598d5d74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.882706 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.915889 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.916036 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/55b770c0-e50a-4a1e-b711-5e87b1a4cc3d-registration-dir\") pod \"csi-hostpathplugin-twcnj\" (UID: \"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d\") " pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.916085 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/55b770c0-e50a-4a1e-b711-5e87b1a4cc3d-plugins-dir\") pod \"csi-hostpathplugin-twcnj\" (UID: \"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d\") " pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.916156 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/55b770c0-e50a-4a1e-b711-5e87b1a4cc3d-mountpoint-dir\") pod \"csi-hostpathplugin-twcnj\" (UID: \"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d\") " pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.916202 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/55b770c0-e50a-4a1e-b711-5e87b1a4cc3d-csi-data-dir\") pod \"csi-hostpathplugin-twcnj\" (UID: \"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d\") " pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.916220 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zfs2\" (UniqueName: \"kubernetes.io/projected/55b770c0-e50a-4a1e-b711-5e87b1a4cc3d-kube-api-access-8zfs2\") pod \"csi-hostpathplugin-twcnj\" (UID: \"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d\") " pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:57 crc kubenswrapper[5173]: I1209 14:13:57.916244 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/55b770c0-e50a-4a1e-b711-5e87b1a4cc3d-socket-dir\") pod \"csi-hostpathplugin-twcnj\" (UID: \"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d\") " pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:57 crc kubenswrapper[5173]: E1209 14:13:57.916367 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:58.41633586 +0000 UTC m=+121.341618107 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.010044 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.017193 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/55b770c0-e50a-4a1e-b711-5e87b1a4cc3d-csi-data-dir\") pod \"csi-hostpathplugin-twcnj\" (UID: \"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d\") " pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.017527 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8zfs2\" (UniqueName: \"kubernetes.io/projected/55b770c0-e50a-4a1e-b711-5e87b1a4cc3d-kube-api-access-8zfs2\") pod \"csi-hostpathplugin-twcnj\" (UID: \"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d\") " pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.018005 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/55b770c0-e50a-4a1e-b711-5e87b1a4cc3d-socket-dir\") pod \"csi-hostpathplugin-twcnj\" (UID: \"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d\") " pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.018460 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/55b770c0-e50a-4a1e-b711-5e87b1a4cc3d-registration-dir\") pod \"csi-hostpathplugin-twcnj\" (UID: \"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d\") " pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.018653 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/55b770c0-e50a-4a1e-b711-5e87b1a4cc3d-plugins-dir\") pod \"csi-hostpathplugin-twcnj\" (UID: \"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d\") " pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.018831 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.018610 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/55b770c0-e50a-4a1e-b711-5e87b1a4cc3d-registration-dir\") pod \"csi-hostpathplugin-twcnj\" (UID: \"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d\") " pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.018390 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/55b770c0-e50a-4a1e-b711-5e87b1a4cc3d-socket-dir\") pod \"csi-hostpathplugin-twcnj\" (UID: \"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d\") " pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.018776 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/55b770c0-e50a-4a1e-b711-5e87b1a4cc3d-plugins-dir\") pod \"csi-hostpathplugin-twcnj\" (UID: \"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d\") " pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.017478 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/55b770c0-e50a-4a1e-b711-5e87b1a4cc3d-csi-data-dir\") pod \"csi-hostpathplugin-twcnj\" (UID: \"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d\") " pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:58 crc kubenswrapper[5173]: E1209 14:13:58.019183 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:58.51916837 +0000 UTC m=+121.444450617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.022052 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/55b770c0-e50a-4a1e-b711-5e87b1a4cc3d-mountpoint-dir\") pod \"csi-hostpathplugin-twcnj\" (UID: \"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d\") " pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.022227 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/55b770c0-e50a-4a1e-b711-5e87b1a4cc3d-mountpoint-dir\") pod \"csi-hostpathplugin-twcnj\" (UID: \"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d\") " pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.043123 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zfs2\" (UniqueName: \"kubernetes.io/projected/55b770c0-e50a-4a1e-b711-5e87b1a4cc3d-kube-api-access-8zfs2\") pod \"csi-hostpathplugin-twcnj\" (UID: \"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d\") " pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.102193 5173 patch_prober.go:28] interesting pod/downloads-747b44746d-zhlr7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.102260 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-zhlr7" podUID="6794662c-7933-4e08-870f-c44892aef039" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.114135 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4nhj5"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.116291 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.118135 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.121703 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.122949 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:58 crc kubenswrapper[5173]: E1209 14:13:58.123197 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:58.623177818 +0000 UTC m=+121.548460065 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133442 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133490 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-znppb" event={"ID":"fb78d03e-40d5-4c32-9f47-49a596f9b55a","Type":"ContainerStarted","Data":"e511d8c90cc7e9608814843c73e55ec40e14fb96f3c08ff6449e7fb80f648e10"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133530 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"b7b3f6978d0f7766495d4fb3af781866fa61def98aab9c80dc0a831a3ae3c792"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133546 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-znppb"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133561 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z" event={"ID":"19dbfec3-c944-4ab4-9b21-a1ac67840543","Type":"ContainerStarted","Data":"a2eebddce9484dc24fc8bdcb70abc6e08720511cd91f0e778c45b3296af5d4ec"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133573 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr" event={"ID":"41f63208-f276-4c44-ad67-77446cad7193","Type":"ContainerStarted","Data":"855948aa202f9237b0bdd4fd805367fe3602245ba68230b6e16b0acf9a8caa45"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133585 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b2v58" event={"ID":"5263b977-f1d9-4b01-9cd3-25a488d46ac7","Type":"ContainerStarted","Data":"fd2af776ca2f78414a4f997fa242eec846d5123bc94726103ba7999c8d9e5c61"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133597 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-z495l"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133609 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133620 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-j76wj"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133630 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-s7fzg"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133640 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-wznw7"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133652 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"de0a421471efe83597af58214552749f96bc6df60621f76f408747f9807938f8"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133665 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-qjcxb"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133678 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" event={"ID":"139a1ff9-4912-4a2c-b0d2-c220452ab9f2","Type":"ContainerStarted","Data":"ebf334098c36f8ad54d0d77024beb1bf20f9762ea1dc933dc42606ec98de2d5e"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133690 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133703 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-rqxjt"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133714 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-xtwzt" event={"ID":"07abd9d6-5952-41d9-aea4-ae02adf03b84","Type":"ContainerStarted","Data":"c4bde551a8680370852ad3cca6151ad78a061e6b3408fe3cacaf65ba423facdb"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133726 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-df498" event={"ID":"d9234899-46cc-4f8d-bfe6-a65d9532ba16","Type":"ContainerStarted","Data":"5eed6675373fc9158b881033a40246869103e63aa89b03fc58301177b7f652b0"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133738 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk" event={"ID":"5baa9a3d-ae8f-4ff7-abcb-e831745d4e0c","Type":"ContainerStarted","Data":"7de37eb42aaa64c7885e6d9b7965f5aec0505abbb2ede6b39449fbb168382e45"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133750 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv" event={"ID":"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd","Type":"ContainerDied","Data":"f0034df578b491613f87ced6b2dc4e1b508875937532cf9345590b4acb6e09ed"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133765 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9" event={"ID":"eb0c4171-4c7a-4d9c-a467-47895e7dca09","Type":"ContainerStarted","Data":"98b6baf02a86349cc69d3de4e081b4b0b6a66b52db26ea82bf8a2877f2c07f32"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133776 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj" event={"ID":"f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58","Type":"ContainerStarted","Data":"a8b2827f0be5587fb974069fbe3b108d473b4a68e5c90dba2074636998de52ed"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133787 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv" event={"ID":"36b504f1-6aae-4802-ab5d-ce89caf2f742","Type":"ContainerStarted","Data":"bc42dda00a7f508e88841817e7d826c38808dc85d57dfbf803cc34dbeed98380"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133798 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-z495l" event={"ID":"5e8f532f-b948-4468-9397-7318c60c6fa8","Type":"ContainerStarted","Data":"aedcb86f6a1af721ef0228dbb94c36ba3281e9f41a45afbc5b1879267aa06b13"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133809 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-xtwzt"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133819 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-xtwzt" event={"ID":"07abd9d6-5952-41d9-aea4-ae02adf03b84","Type":"ContainerStarted","Data":"ff3f26447afe6faa77418a5aae460af36e419a93df9301c244349b537a8fc508"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133831 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133847 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-rqxjt" event={"ID":"99788f65-7403-4cb0-91bb-f318172f7171","Type":"ContainerStarted","Data":"e831bcbf76390653ae9caec8ce53731e4840dc1bf17cc1f843468a82036330ee"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133862 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133874 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-lh94q"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133885 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-df498"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133897 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-twcnj"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133909 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-j76wj" event={"ID":"7ab63887-5fdd-419d-a3cc-af0c227e114a","Type":"ContainerStarted","Data":"3a99017c998ccc6db1121d460ad991e5ce28978b0055d5250036646c454ddce0"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133920 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jf4h7"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133933 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-qjcxb" event={"ID":"7ff9b667-97da-48d5-85b6-7c02806cc6c6","Type":"ContainerStarted","Data":"8fc5a5912c6c444cc67800d7b901140b080a655d2a0c24274febaf9a7669c412"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133944 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g" event={"ID":"d171fe05-fe49-46fb-9407-bdc1f9272d4b","Type":"ContainerStarted","Data":"cf78088fcbf995dd440b5a38e6c7b70e40c43df97d91d40a47449c204bb78e3c"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133955 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.133967 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5f78n"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.178501 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-twcnj"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.224927 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.224996 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/54ba16c7-dd59-4faa-9932-7998a5377969-certs\") pod \"machine-config-server-4nhj5\" (UID: \"54ba16c7-dd59-4faa-9932-7998a5377969\") " pod="openshift-machine-config-operator/machine-config-server-4nhj5"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.225211 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/54ba16c7-dd59-4faa-9932-7998a5377969-node-bootstrap-token\") pod \"machine-config-server-4nhj5\" (UID: \"54ba16c7-dd59-4faa-9932-7998a5377969\") " pod="openshift-machine-config-operator/machine-config-server-4nhj5"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.225343 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gpzv\" (UniqueName: \"kubernetes.io/projected/54ba16c7-dd59-4faa-9932-7998a5377969-kube-api-access-5gpzv\") pod \"machine-config-server-4nhj5\" (UID: \"54ba16c7-dd59-4faa-9932-7998a5377969\") " pod="openshift-machine-config-operator/machine-config-server-4nhj5"
Dec 09 14:13:58 crc kubenswrapper[5173]: E1209 14:13:58.227381 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:58.72734205 +0000 UTC m=+121.652624477 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.229409 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-kfj8k"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.244122 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5f78n"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.249160 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.249387 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.249442 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.249584 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322271 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h" event={"ID":"0917873e-8059-49a3-aec4-f2b5152fc356","Type":"ContainerStarted","Data":"246e5fa5d79f3c991cfbea94958259d10f7d487ca58a277852b7190643b81da0"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322323 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g" event={"ID":"76317343-bf5b-441f-ae79-e09f3d1188cd","Type":"ContainerStarted","Data":"7911b8ce3277012fc26d4e22024b68b03b0fad5e0f3d6495863b58aa179323c6"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322557 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" event={"ID":"b4546d4c-2456-4baf-98ca-9a1ca067bb14","Type":"ContainerStarted","Data":"29d1a40a31f92b258afa71b9845cc4f79a261fce923552071357c3b82b952b69"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322589 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7r65t"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322608 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4" event={"ID":"b32aad18-fc40-4128-96a6-b4d1b3de9cb5","Type":"ContainerStarted","Data":"f6fd9b35214429490fa64a50a5882e57d9c7891446179072e9ce9f1203ce3b58"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322652 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-znppb"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322679 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rbkgm" event={"ID":"f9f157b4-58c8-4daf-81bc-87cd621d3d55","Type":"ContainerStarted","Data":"6cbdba6bdde635827bc6a47974918dfcccd7013f424dfd4d808fb18de6b1a95f"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322694 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-s7fzg" event={"ID":"84c3a797-e34a-463b-b598-7b75849c651b","Type":"ContainerStarted","Data":"da094f71e220e747a991c2a1ab11c9b64e257042f36aa70742565a06b10819d3"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322712 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322729 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322760 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lbnx5" event={"ID":"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b","Type":"ContainerStarted","Data":"f68a81954604469f510c2cc3c6f23dee2e335119d664b76a091e523841de25fe"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322776 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322788 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322861 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-znppb"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322878 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322891 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc" event={"ID":"ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7","Type":"ContainerStarted","Data":"1c9524c9b940ad7cf068ad79987ddc2dba02051f4bd6d1eb5b9b265768773475"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322905 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c" event={"ID":"03517711-3312-4a4f-8ede-4d39051bd092","Type":"ContainerStarted","Data":"dd15e2d1f547457c7b3ddffc7ecbc67c0e245f2435c12d07eb495a27e3118631"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322918 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322932 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-wznw7" event={"ID":"d2105b69-54d7-4854-ba11-9108ad09016d","Type":"ContainerStarted","Data":"6fca930b3c5b50a80b8361329b380a5a413500a0634905bafaf7eb3dda81f9c3"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322945 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7r65t" event={"ID":"f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966","Type":"ContainerStarted","Data":"553b94f99bc23256026ebd70d9fcfd7ea034d98075afb5c2ec933a3ae23ca3bb"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322958 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-zmsp9"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322982 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lbnx5" event={"ID":"5d73c2ad-08e4-439f-8c5f-adb67b27ef4b","Type":"ContainerStarted","Data":"8aeea7baa1a46980a6ce81c2e76904530b564fa0ec1483234279702e44103b51"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.322995 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-rbkgm"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.323007 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c" event={"ID":"03517711-3312-4a4f-8ede-4d39051bd092","Type":"ContainerStarted","Data":"ea6b384e7f6a0981652968e6bd250265613ca3e10248f2f48e8d17682c6a5cde"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.323019 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr" event={"ID":"41f63208-f276-4c44-ad67-77446cad7193","Type":"ContainerStarted","Data":"735c9ca999a6366c439455e96f4aea461958eea633a800c232fda00ebc4d223d"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.323034 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5f78n"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.323047 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-zmsp9" event={"ID":"f2962534-956e-497a-89af-1b5d39a61c84","Type":"ContainerStarted","Data":"550221c63bb8f28262a9b1db11e4ab04e6eca910c0226ce47a4cecee5094cf42"}
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.323068 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kfj8k"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.323083 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7j5wv"]
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.326259 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kfj8k"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.330728 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.332675 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.333387 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/54ba16c7-dd59-4faa-9932-7998a5377969-certs\") pod \"machine-config-server-4nhj5\" (UID: \"54ba16c7-dd59-4faa-9932-7998a5377969\") " pod="openshift-machine-config-operator/machine-config-server-4nhj5"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.334819 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/54ba16c7-dd59-4faa-9932-7998a5377969-node-bootstrap-token\") pod \"machine-config-server-4nhj5\" (UID: \"54ba16c7-dd59-4faa-9932-7998a5377969\") " pod="openshift-machine-config-operator/machine-config-server-4nhj5"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.334905 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/46276f4a-6b89-4791-b0a5-820978009c5e-cert\") pod \"ingress-canary-5f78n\" (UID: \"46276f4a-6b89-4791-b0a5-820978009c5e\") " pod="openshift-ingress-canary/ingress-canary-5f78n"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.335035 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5gpzv\" (UniqueName: \"kubernetes.io/projected/54ba16c7-dd59-4faa-9932-7998a5377969-kube-api-access-5gpzv\") pod \"machine-config-server-4nhj5\" (UID: \"54ba16c7-dd59-4faa-9932-7998a5377969\") " pod="openshift-machine-config-operator/machine-config-server-4nhj5"
Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.335098 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpvtr\" (UniqueName: \"kubernetes.io/projected/46276f4a-6b89-4791-b0a5-820978009c5e-kube-api-access-wpvtr\") pod \"ingress-canary-5f78n\" (UID: \"46276f4a-6b89-4791-b0a5-820978009c5e\") " pod="openshift-ingress-canary/ingress-canary-5f78n"
Dec 09 14:13:58 crc kubenswrapper[5173]: E1209 14:13:58.337243 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:58.83721716 +0000 UTC m=+121.762499407 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.338080 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.342912 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.362003 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/54ba16c7-dd59-4faa-9932-7998a5377969-certs\") pod \"machine-config-server-4nhj5\" (UID: \"54ba16c7-dd59-4faa-9932-7998a5377969\") " pod="openshift-machine-config-operator/machine-config-server-4nhj5" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.379999 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/54ba16c7-dd59-4faa-9932-7998a5377969-node-bootstrap-token\") pod \"machine-config-server-4nhj5\" (UID: \"54ba16c7-dd59-4faa-9932-7998a5377969\") " pod="openshift-machine-config-operator/machine-config-server-4nhj5" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.386726 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gpzv\" (UniqueName: \"kubernetes.io/projected/54ba16c7-dd59-4faa-9932-7998a5377969-kube-api-access-5gpzv\") pod \"machine-config-server-4nhj5\" (UID: \"54ba16c7-dd59-4faa-9932-7998a5377969\") " pod="openshift-machine-config-operator/machine-config-server-4nhj5" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.430801 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4nhj5" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.445472 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.445532 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34ce85ee-5f93-46ea-a866-72bb238285ff-metrics-tls\") pod \"dns-default-kfj8k\" (UID: \"34ce85ee-5f93-46ea-a866-72bb238285ff\") " pod="openshift-dns/dns-default-kfj8k" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.445588 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/34ce85ee-5f93-46ea-a866-72bb238285ff-tmp-dir\") pod \"dns-default-kfj8k\" (UID: \"34ce85ee-5f93-46ea-a866-72bb238285ff\") " pod="openshift-dns/dns-default-kfj8k" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.445675 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.445519 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" event={"ID":"139a1ff9-4912-4a2c-b0d2-c220452ab9f2","Type":"ContainerStarted","Data":"70c83f397874f1398b358e0f55af2f3b26b770995ee5acd63e001169a7577121"} Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.445837 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv" event={"ID":"36b504f1-6aae-4802-ab5d-ce89caf2f742","Type":"ContainerStarted","Data":"2befee708453c170540b24c01ae539674f19508c0fea512e58573741e8dd92ef"} Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.445875 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-qjcxb" event={"ID":"7ff9b667-97da-48d5-85b6-7c02806cc6c6","Type":"ContainerStarted","Data":"80f2e5f025aed5dca7a236b578d62e4cb8a12044ea077482d5b066a0988d3027"} Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.445898 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8" event={"ID":"683a6416-7033-4896-9e1e-be8b31f74d38","Type":"ContainerStarted","Data":"7f2b42c1c4b2a6c970e10f6762f4bc2359d04014e07b5e1960b1fe8e5531e9ab"} Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.445914 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-z495l" event={"ID":"5e8f532f-b948-4468-9397-7318c60c6fa8","Type":"ContainerStarted","Data":"9427278c2b81383c9f3cf4c0f1800a8f6f9ea03e2c3630bf8eb5bdebf343c6bc"} Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.445932 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b2v58"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.445945 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-config-operator/openshift-config-operator-5777786469-pkw8g"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.445957 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-rqxjt" event={"ID":"99788f65-7403-4cb0-91bb-f318172f7171","Type":"ContainerStarted","Data":"a0698b30ec2433ea4ada08f40851954cac33445475b42c4fa6172cad51e65232"} Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.445971 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k" event={"ID":"59e9e420-971a-4d09-80f7-1039326724b8","Type":"ContainerStarted","Data":"5d1b573ed78c79ba8485d05a5eb1aa09beae44d8c110d45e7dbbb9cd8f357f66"} Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.445984 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jf4h7" event={"ID":"bd366c79-af35-434f-9179-c5ecf3974dd8","Type":"ContainerStarted","Data":"b9465abb0951418f5044f68cc9ce91f91196974f5a08d7d6f9b12acb9f07dfc2"} Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.445997 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-zhlr7"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.446009 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-q5kgl"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.446060 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-57k5h"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.446100 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-ftc7p"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.446114 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-54cg5"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.446124 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-lh94q"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.446133 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-dj8z9"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.446144 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-xtwzt"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.446163 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-znppb"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.446172 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-2wr9z"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.446191 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-n4pnk"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.446212 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lbnx5"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.446227 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"] Dec 09 14:13:58 crc kubenswrapper[5173]: 
I1209 14:13:58.446236 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-df498"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.445673 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/46276f4a-6b89-4791-b0a5-820978009c5e-cert\") pod \"ingress-canary-5f78n\" (UID: \"46276f4a-6b89-4791-b0a5-820978009c5e\") " pod="openshift-ingress-canary/ingress-canary-5f78n" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.446514 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34ce85ee-5f93-46ea-a866-72bb238285ff-config-volume\") pod \"dns-default-kfj8k\" (UID: \"34ce85ee-5f93-46ea-a866-72bb238285ff\") " pod="openshift-dns/dns-default-kfj8k" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.446568 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wpvtr\" (UniqueName: \"kubernetes.io/projected/46276f4a-6b89-4791-b0a5-820978009c5e-kube-api-access-wpvtr\") pod \"ingress-canary-5f78n\" (UID: \"46276f4a-6b89-4791-b0a5-820978009c5e\") " pod="openshift-ingress-canary/ingress-canary-5f78n" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.446619 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lnct\" (UniqueName: \"kubernetes.io/projected/34ce85ee-5f93-46ea-a866-72bb238285ff-kube-api-access-8lnct\") pod \"dns-default-kfj8k\" (UID: \"34ce85ee-5f93-46ea-a866-72bb238285ff\") " pod="openshift-dns/dns-default-kfj8k" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.447844 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Dec 09 14:13:58 crc kubenswrapper[5173]: E1209 14:13:58.447909 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:58.947891904 +0000 UTC m=+121.873174191 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.455175 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/46276f4a-6b89-4791-b0a5-820978009c5e-cert\") pod \"ingress-canary-5f78n\" (UID: \"46276f4a-6b89-4791-b0a5-820978009c5e\") " pod="openshift-ingress-canary/ingress-canary-5f78n" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.470139 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.471787 5173 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-ppjzv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.471852 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv" podUID="36b504f1-6aae-4802-ab5d-ce89caf2f742" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.483638 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpvtr\" (UniqueName: \"kubernetes.io/projected/46276f4a-6b89-4791-b0a5-820978009c5e-kube-api-access-wpvtr\") pod \"ingress-canary-5f78n\" (UID: \"46276f4a-6b89-4791-b0a5-820978009c5e\") " pod="openshift-ingress-canary/ingress-canary-5f78n" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.520482 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.535471 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-qjcxb"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.547755 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:58 crc kubenswrapper[5173]: E1209 14:13:58.547904 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:59.047877896 +0000 UTC m=+121.973160143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.547976 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34ce85ee-5f93-46ea-a866-72bb238285ff-config-volume\") pod \"dns-default-kfj8k\" (UID: \"34ce85ee-5f93-46ea-a866-72bb238285ff\") " pod="openshift-dns/dns-default-kfj8k" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.548041 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8lnct\" (UniqueName: \"kubernetes.io/projected/34ce85ee-5f93-46ea-a866-72bb238285ff-kube-api-access-8lnct\") pod \"dns-default-kfj8k\" (UID: \"34ce85ee-5f93-46ea-a866-72bb238285ff\") " pod="openshift-dns/dns-default-kfj8k" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.548084 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc4pj\" (UniqueName: \"kubernetes.io/projected/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-kube-api-access-gc4pj\") pod \"cni-sysctl-allowlist-ds-7j5wv\" (UID: \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.548172 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-ready\") pod \"cni-sysctl-allowlist-ds-7j5wv\" (UID: \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.548204 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7j5wv\" (UID: \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.548233 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.548296 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34ce85ee-5f93-46ea-a866-72bb238285ff-metrics-tls\") pod \"dns-default-kfj8k\" (UID: \"34ce85ee-5f93-46ea-a866-72bb238285ff\") " pod="openshift-dns/dns-default-kfj8k" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.548366 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/34ce85ee-5f93-46ea-a866-72bb238285ff-tmp-dir\") pod \"dns-default-kfj8k\" (UID: 
\"34ce85ee-5f93-46ea-a866-72bb238285ff\") " pod="openshift-dns/dns-default-kfj8k" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.548469 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7j5wv\" (UID: \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" Dec 09 14:13:58 crc kubenswrapper[5173]: E1209 14:13:58.548895 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:59.048881557 +0000 UTC m=+121.974163804 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.549752 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34ce85ee-5f93-46ea-a866-72bb238285ff-config-volume\") pod \"dns-default-kfj8k\" (UID: \"34ce85ee-5f93-46ea-a866-72bb238285ff\") " pod="openshift-dns/dns-default-kfj8k" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.550484 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/34ce85ee-5f93-46ea-a866-72bb238285ff-tmp-dir\") pod \"dns-default-kfj8k\" (UID: \"34ce85ee-5f93-46ea-a866-72bb238285ff\") " pod="openshift-dns/dns-default-kfj8k" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.559433 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34ce85ee-5f93-46ea-a866-72bb238285ff-metrics-tls\") pod \"dns-default-kfj8k\" (UID: \"34ce85ee-5f93-46ea-a866-72bb238285ff\") " pod="openshift-dns/dns-default-kfj8k" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.562227 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.585269 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lnct\" (UniqueName: \"kubernetes.io/projected/34ce85ee-5f93-46ea-a866-72bb238285ff-kube-api-access-8lnct\") pod \"dns-default-kfj8k\" (UID: \"34ce85ee-5f93-46ea-a866-72bb238285ff\") " pod="openshift-dns/dns-default-kfj8k" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.589612 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.593723 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-z495l"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.604436 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5f78n" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.605658 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-rqxjt"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.606070 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-b2v58" podStartSLOduration=97.606051647 podStartE2EDuration="1m37.606051647s" podCreationTimestamp="2025-12-09 14:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:53.757674597 +0000 UTC m=+116.682956854" watchObservedRunningTime="2025-12-09 14:13:58.606051647 +0000 UTC m=+121.531333894" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.638144 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.640371 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.644147 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-s7fzg"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.647299 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.649781 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.649979 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gc4pj\" (UniqueName: \"kubernetes.io/projected/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-kube-api-access-gc4pj\") pod \"cni-sysctl-allowlist-ds-7j5wv\" (UID: \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.650028 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-ready\") pod \"cni-sysctl-allowlist-ds-7j5wv\" (UID: \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.650058 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7j5wv\" (UID: \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.650109 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7j5wv\" (UID: 
\"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.651446 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-rbkgm"] Dec 09 14:13:58 crc kubenswrapper[5173]: E1209 14:13:58.651661 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:59.151636375 +0000 UTC m=+122.076918672 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.651815 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-7j5wv\" (UID: \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.651906 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-7j5wv\" (UID: \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.652115 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-ready\") pod \"cni-sysctl-allowlist-ds-7j5wv\" (UID: \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.653287 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-j76wj"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.654144 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.656810 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-wznw7"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.658920 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.658968 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7r65t"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.675108 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-zmsp9"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 
14:13:58.688852 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kfj8k" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.711733 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc4pj\" (UniqueName: \"kubernetes.io/projected/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-kube-api-access-gc4pj\") pod \"cni-sysctl-allowlist-ds-7j5wv\" (UID: \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.731718 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.792895 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:58 crc kubenswrapper[5173]: E1209 14:13:58.794501 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:59.294482371 +0000 UTC m=+122.219764638 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.796496 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jf4h7"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.832194 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.869758 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g" podStartSLOduration=97.869733333 podStartE2EDuration="1m37.869733333s" podCreationTimestamp="2025-12-09 14:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:57.827869247 +0000 UTC m=+120.753151504" watchObservedRunningTime="2025-12-09 14:13:58.869733333 +0000 UTC m=+121.795015580" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.894157 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:58 crc kubenswrapper[5173]: E1209 14:13:58.894268 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:59.394244166 +0000 UTC m=+122.319526423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.894513 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:58 crc kubenswrapper[5173]: E1209 14:13:58.894910 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:59.394892056 +0000 UTC m=+122.320174363 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.921064 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.951926 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-twcnj"] Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.956304 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-dzzb8" podStartSLOduration=97.956290948 podStartE2EDuration="1m37.956290948s" podCreationTimestamp="2025-12-09 14:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:58.593494916 +0000 UTC m=+121.518777193" watchObservedRunningTime="2025-12-09 14:13:58.956290948 +0000 UTC m=+121.881573195" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.961808 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podStartSLOduration=96.961787029 podStartE2EDuration="1m36.961787029s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:58.614175 +0000 UTC m=+121.539457277" watchObservedRunningTime="2025-12-09 14:13:58.961787029 +0000 UTC m=+121.887069276" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.971002 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-lbnx5" podStartSLOduration=97.970989704 podStartE2EDuration="1m37.970989704s" podCreationTimestamp="2025-12-09 14:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:58.675719135 +0000 UTC m=+121.601001382" watchObservedRunningTime="2025-12-09 14:13:58.970989704 +0000 UTC m=+121.896271951" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.973468 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-tg55c" podStartSLOduration=96.973459711 podStartE2EDuration="1m36.973459711s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:58.726526807 +0000 UTC m=+121.651809064" watchObservedRunningTime="2025-12-09 14:13:58.973459711 +0000 UTC m=+121.898741958" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.977892 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-h5hkr" podStartSLOduration=96.977868929 podStartE2EDuration="1m36.977868929s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:58.791961293 +0000 UTC m=+121.717243550" watchObservedRunningTime="2025-12-09 14:13:58.977868929 +0000 UTC m=+121.903151176" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.984552 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv" podStartSLOduration=96.984532206 podStartE2EDuration="1m36.984532206s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:58.871759407 +0000 UTC m=+121.797041664" watchObservedRunningTime="2025-12-09 14:13:58.984532206 +0000 UTC m=+121.909814463" Dec 09 14:13:58 crc kubenswrapper[5173]: I1209 14:13:58.995076 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:58 crc kubenswrapper[5173]: E1209 14:13:58.995393 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:59.495338683 +0000 UTC m=+122.420620940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.090746 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g" event={"ID":"d171fe05-fe49-46fb-9407-bdc1f9272d4b","Type":"ContainerStarted","Data":"7cb94fa5b2a5703a851fd5a637be2e6a5fa7d03100264b641663fd570f9e1917"} Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.092199 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-s7fzg" event={"ID":"84c3a797-e34a-463b-b598-7b75849c651b","Type":"ContainerStarted","Data":"ab80dcb8087f2b52129f457949121b8189faefbdc921e3f3da1029c906a02304"} Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.094620 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc" event={"ID":"ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7","Type":"ContainerStarted","Data":"48e33221bc939a06e602262e55f7aa23a8bf86db1e242c9b609914829498b5e0"} Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.095759 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp" event={"ID":"b00790a6-0331-44bd-9ddb-10d0598d5d74","Type":"ContainerStarted","Data":"aadcca17c7cdce8204267d7d76d23cf1c9d8026d14a7bdca4a676df176796a6c"} Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 
14:13:59.096803 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:59 crc kubenswrapper[5173]: E1209 14:13:59.097167 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:59.597150291 +0000 UTC m=+122.522432538 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.102517 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-twcnj" event={"ID":"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d","Type":"ContainerStarted","Data":"5dedab86bd1b1c4b9ea005e8ced5ea867eb7407b5b894e3f749da081942a9e06"} Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.105647 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj" event={"ID":"f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58","Type":"ContainerStarted","Data":"fd35efa225403c001a16fb9eddc00f99c12573db5743e1dec82f044a5d65839b"} Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.108378 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4nhj5" event={"ID":"54ba16c7-dd59-4faa-9932-7998a5377969","Type":"ContainerStarted","Data":"75ef8a8e8ea21f07eea346e363b87a4223d50e6d142791acd46346aa18049bfe"} Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.114943 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-j76wj" event={"ID":"7ab63887-5fdd-419d-a3cc-af0c227e114a","Type":"ContainerStarted","Data":"53483982a41f65dee6b242bf7e38fdcb789697a1db1cc0d08fc02733f696ac8f"} Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.198063 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:59 crc kubenswrapper[5173]: E1209 14:13:59.198683 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:59.698663321 +0000 UTC m=+122.623945578 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.207766 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5f78n"] Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.300066 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:59 crc kubenswrapper[5173]: E1209 14:13:59.300437 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:13:59.800422038 +0000 UTC m=+122.725704285 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.314861 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.314902 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.315056 5173 patch_prober.go:28] interesting pod/downloads-747b44746d-zhlr7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.315114 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-zhlr7" podUID="6794662c-7933-4e08-870f-c44892aef039" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.315296 5173 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-d66gc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.315368 5173 prober.go:120] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc" podUID="ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.315481 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.316868 5173 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-b8zmj container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.316884 5173 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-z9d5g container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.316898 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj" podUID="f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.316911 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g" podUID="d171fe05-fe49-46fb-9407-bdc1f9272d4b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.340973 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc" podStartSLOduration=97.340956359 podStartE2EDuration="1m37.340956359s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:59.340235087 +0000 UTC m=+122.265517344" watchObservedRunningTime="2025-12-09 14:13:59.340956359 +0000 UTC m=+122.266238606" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.373179 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g" podStartSLOduration=97.373155392 podStartE2EDuration="1m37.373155392s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:59.372702678 +0000 UTC m=+122.297984925" watchObservedRunningTime="2025-12-09 14:13:59.373155392 +0000 UTC m=+122.298437649" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.406913 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.408575 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-j76wj" podStartSLOduration=97.408552843 podStartE2EDuration="1m37.408552843s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:59.406222551 +0000 UTC m=+122.331504808" watchObservedRunningTime="2025-12-09 14:13:59.408552843 +0000 UTC m=+122.333835090" Dec 09 14:13:59 crc kubenswrapper[5173]: E1209 14:13:59.408675 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:13:59.908658667 +0000 UTC m=+122.833940904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.421444 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.430821 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:13:59 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld Dec 09 14:13:59 crc kubenswrapper[5173]: [+]process-running ok Dec 09 14:13:59 crc kubenswrapper[5173]: healthz check failed Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.430940 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.449219 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj" podStartSLOduration=97.449200408 podStartE2EDuration="1m37.449200408s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:59.447204146 +0000 UTC m=+122.372486403" watchObservedRunningTime="2025-12-09 14:13:59.449200408 +0000 UTC m=+122.374482655" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.511823 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 
09 14:13:59 crc kubenswrapper[5173]: E1209 14:13:59.512635 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:00.012619273 +0000 UTC m=+122.937901530 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.544738 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-z495l" podStartSLOduration=98.544722161 podStartE2EDuration="1m38.544722161s" podCreationTimestamp="2025-12-09 14:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:13:59.506954966 +0000 UTC m=+122.432237223" watchObservedRunningTime="2025-12-09 14:13:59.544722161 +0000 UTC m=+122.470004408" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.544911 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kfj8k"] Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.612990 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:59 crc kubenswrapper[5173]: E1209 14:13:59.613345 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:00.113287256 +0000 UTC m=+123.038569513 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.614872 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:59 crc kubenswrapper[5173]: E1209 14:13:59.622190 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-09 14:14:00.122168362 +0000 UTC m=+123.047450609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.687886 5173 ???:1] "http: TLS handshake error from 192.168.126.11:54240: no serving certificate available for the kubelet" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.718548 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:59 crc kubenswrapper[5173]: E1209 14:13:59.719100 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:00.219059427 +0000 UTC m=+123.144341674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.719794 5173 ???:1] "http: TLS handshake error from 192.168.126.11:54254: no serving certificate available for the kubelet" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.746240 5173 ???:1] "http: TLS handshake error from 192.168.126.11:54268: no serving certificate available for the kubelet" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.803372 5173 ???:1] "http: TLS handshake error from 192.168.126.11:54278: no serving certificate available for the kubelet" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.820609 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:59 crc kubenswrapper[5173]: E1209 14:13:59.821043 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:00.321024671 +0000 UTC m=+123.246306918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.860475 5173 ???:1] "http: TLS handshake error from 192.168.126.11:54290: no serving certificate available for the kubelet" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.922113 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:13:59 crc kubenswrapper[5173]: E1209 14:13:59.922497 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:00.422475028 +0000 UTC m=+123.347757275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.922538 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:13:59 crc kubenswrapper[5173]: E1209 14:13:59.922863 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:00.42285401 +0000 UTC m=+123.348136257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.965996 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-pkw8g" Dec 09 14:13:59 crc kubenswrapper[5173]: I1209 14:13:59.969458 5173 ???:1] "http: TLS handshake error from 192.168.126.11:54292: no serving certificate available for the kubelet" Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.023780 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:00 crc kubenswrapper[5173]: E1209 14:14:00.024159 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:00.524139462 +0000 UTC m=+123.449421709 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.125740 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:00 crc kubenswrapper[5173]: E1209 14:14:00.126083 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:00.626069555 +0000 UTC m=+123.551351802 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.138581 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" event={"ID":"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54","Type":"ContainerStarted","Data":"9f5b1b1248fe758237429cd7396228391c3b52fa2305dfff9393cf993652961a"} Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.139499 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5f78n" event={"ID":"46276f4a-6b89-4791-b0a5-820978009c5e","Type":"ContainerStarted","Data":"765e92491c014e2a2a7d927a1e5b862c46f040d65a09730a6e6c58496481ec6b"} Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.140567 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" event={"ID":"b4546d4c-2456-4baf-98ca-9a1ca067bb14","Type":"ContainerStarted","Data":"5a8589eb0c06b3865eeec157c73a95f66bcea69e91845173a28a8d57b39d6f2c"} Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.142678 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rbkgm" event={"ID":"f9f157b4-58c8-4daf-81bc-87cd621d3d55","Type":"ContainerStarted","Data":"9063cbbf08aa96cc1ac6463fa7ee6ad99e87faaebc923213ac3b1d69ae7d9b21"} Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.145494 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-wznw7" event={"ID":"d2105b69-54d7-4854-ba11-9108ad09016d","Type":"ContainerStarted","Data":"667c0a2a95be14e4ef7d6273572b3b4513e2bde35d0b98301bcdb5aa3e14df59"} Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.146881 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4" event={"ID":"b32aad18-fc40-4128-96a6-b4d1b3de9cb5","Type":"ContainerStarted","Data":"1cdcb918c073efa3c21329741077f87dbaf13781ba612dcd254ff312c9e3642e"} Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.148058 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-zmsp9" event={"ID":"f2962534-956e-497a-89af-1b5d39a61c84","Type":"ContainerStarted","Data":"04991580fb457855bf1a5cdb680938601bfb3898c3b8da181cbb885faf2e08d2"} Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.149690 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kfj8k" event={"ID":"34ce85ee-5f93-46ea-a866-72bb238285ff","Type":"ContainerStarted","Data":"a1bd3d24a0c2115a0db92008086cddf18d8717632f5d8f46fe02bde88590d0f7"} Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.151988 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7r65t" event={"ID":"f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966","Type":"ContainerStarted","Data":"fd60be6fe981b2dd3a5de456fe54f51434cc882a713680fbea6bfe7cc0c10890"} Dec 09 14:14:00 crc 
kubenswrapper[5173]: I1209 14:14:00.168075 5173 ???:1] "http: TLS handshake error from 192.168.126.11:54298: no serving certificate available for the kubelet" Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.226578 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:00 crc kubenswrapper[5173]: E1209 14:14:00.226744 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:00.726711018 +0000 UTC m=+123.651993265 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.227145 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:00 crc kubenswrapper[5173]: E1209 14:14:00.227503 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:00.727489881 +0000 UTC m=+123.652772128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.236587 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv" Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.327957 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:00 crc kubenswrapper[5173]: E1209 14:14:00.328462 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:00.828441943 +0000 UTC m=+123.753724190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.425132 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:14:00 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld Dec 09 14:14:00 crc kubenswrapper[5173]: [+]process-running ok Dec 09 14:14:00 crc kubenswrapper[5173]: healthz check failed Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.425199 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.429318 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:00 crc kubenswrapper[5173]: E1209 14:14:00.429701 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-09 14:14:00.929659624 +0000 UTC m=+123.854941871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.506045 5173 ???:1] "http: TLS handshake error from 192.168.126.11:54310: no serving certificate available for the kubelet" Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.530441 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:00 crc kubenswrapper[5173]: E1209 14:14:00.530579 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:01.030561234 +0000 UTC m=+123.955843481 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.530753 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:00 crc kubenswrapper[5173]: E1209 14:14:00.531036 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:01.031027279 +0000 UTC m=+123.956309526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.632511 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:00 crc kubenswrapper[5173]: E1209 14:14:00.632876 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:01.132843488 +0000 UTC m=+124.058125745 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.734600 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:00 crc kubenswrapper[5173]: E1209 14:14:00.735047 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:01.235028358 +0000 UTC m=+124.160310605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.836260 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:00 crc kubenswrapper[5173]: E1209 14:14:00.836423 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:01.336391783 +0000 UTC m=+124.261674040 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.837021 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:00 crc kubenswrapper[5173]: E1209 14:14:00.837365 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:01.337332442 +0000 UTC m=+124.262614759 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:00 crc kubenswrapper[5173]: I1209 14:14:00.938000 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:00 crc kubenswrapper[5173]: E1209 14:14:00.938261 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:01.438244563 +0000 UTC m=+124.363526810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.040026 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:01 crc kubenswrapper[5173]: E1209 14:14:01.040441 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:01.540425073 +0000 UTC m=+124.465707320 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.141539 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:01 crc kubenswrapper[5173]: E1209 14:14:01.141720 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:01.641692875 +0000 UTC m=+124.566975122 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.142042 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:01 crc kubenswrapper[5173]: E1209 14:14:01.142409 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:01.642396857 +0000 UTC m=+124.567679104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.167020 5173 ???:1] "http: TLS handshake error from 192.168.126.11:54316: no serving certificate available for the kubelet" Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.243563 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:01 crc kubenswrapper[5173]: E1209 14:14:01.243963 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:01.743941377 +0000 UTC m=+124.669223624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.307437 5173 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-d66gc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.307499 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc" podUID="ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.307823 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-z495l" Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.308604 5173 patch_prober.go:28] interesting pod/console-operator-67c89758df-z495l container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.308644 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-z495l" podUID="5e8f532f-b948-4468-9397-7318c60c6fa8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 
10.217.0.20:8443: connect: connection refused" Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.345736 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:01 crc kubenswrapper[5173]: E1209 14:14:01.346209 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:01.8461888 +0000 UTC m=+124.771471077 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.425936 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:14:01 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld Dec 09 14:14:01 crc kubenswrapper[5173]: [+]process-running ok Dec 09 14:14:01 crc kubenswrapper[5173]: healthz check failed Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.426281 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.446796 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:01 crc kubenswrapper[5173]: E1209 14:14:01.447843 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:01.947824013 +0000 UTC m=+124.873106270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.548987 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:01 crc kubenswrapper[5173]: E1209 14:14:01.549436 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:02.049417495 +0000 UTC m=+124.974699742 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.649903 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:01 crc kubenswrapper[5173]: E1209 14:14:01.650049 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:02.150029786 +0000 UTC m=+125.075312023 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.650241 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:01 crc kubenswrapper[5173]: E1209 14:14:01.650520 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:02.150513051 +0000 UTC m=+125.075795288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.751703 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:01 crc kubenswrapper[5173]: E1209 14:14:01.751947 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:02.251912127 +0000 UTC m=+125.177194374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.853530 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:01 crc kubenswrapper[5173]: E1209 14:14:01.854192 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:02.354162169 +0000 UTC m=+125.279444626 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.955300 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:01 crc kubenswrapper[5173]: E1209 14:14:01.955535 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:02.455506644 +0000 UTC m=+125.380788891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:01 crc kubenswrapper[5173]: I1209 14:14:01.955996 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:01 crc kubenswrapper[5173]: E1209 14:14:01.956388 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:02.456375781 +0000 UTC m=+125.381658068 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.057432 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:02 crc kubenswrapper[5173]: E1209 14:14:02.057533 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:02.557516789 +0000 UTC m=+125.482799026 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.057775 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:02 crc kubenswrapper[5173]: E1209 14:14:02.058088 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:02.558060576 +0000 UTC m=+125.483342823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.158617 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:02 crc kubenswrapper[5173]: E1209 14:14:02.159129 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:02.659021179 +0000 UTC m=+125.584303426 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.159711 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:02 crc kubenswrapper[5173]: E1209 14:14:02.160090 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:02.660070631 +0000 UTC m=+125.585352878 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.180872 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k" event={"ID":"59e9e420-971a-4d09-80f7-1039326724b8","Type":"ContainerStarted","Data":"ac8403ec00991ed286fdab4105261825f897f414dd239983c39078a420b25447"} Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.188732 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h" event={"ID":"0917873e-8059-49a3-aec4-f2b5152fc356","Type":"ContainerStarted","Data":"a11fc61cc59fe1dddb08cd924b6742de1a83c71046407e4db8b3cd7159261c36"} Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.196033 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-qjcxb" event={"ID":"7ff9b667-97da-48d5-85b6-7c02806cc6c6","Type":"ContainerStarted","Data":"119754f59dca4f91672c6c38687781bc0848749d94b56d3d87ae836c142b2279"} Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.197927 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv" event={"ID":"b2ab9ef6-9c83-482d-9ea5-148c66ca62bd","Type":"ContainerStarted","Data":"02f6b5070a0cc901a47e9e9c3ca51b1c3ba909877cb0cd1e175990e7798e0ca6"} Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.261328 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:02 crc kubenswrapper[5173]: E1209 14:14:02.261519 5173 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:02.761485957 +0000 UTC m=+125.686768214 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.261703 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:02 crc kubenswrapper[5173]: E1209 14:14:02.262112 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:02.762098976 +0000 UTC m=+125.687381223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.362834 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:02 crc kubenswrapper[5173]: E1209 14:14:02.363096 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:02.863050768 +0000 UTC m=+125.788333015 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.363443 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:02 crc kubenswrapper[5173]: E1209 14:14:02.363973 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:02.863947996 +0000 UTC m=+125.789230413 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.423725 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:14:02 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld Dec 09 14:14:02 crc kubenswrapper[5173]: [+]process-running ok Dec 09 14:14:02 crc kubenswrapper[5173]: healthz check failed Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.423829 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.464884 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:02 crc kubenswrapper[5173]: E1209 14:14:02.465122 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:02.965096634 +0000 UTC m=+125.890378881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.485574 5173 ???:1] "http: TLS handshake error from 192.168.126.11:45798: no serving certificate available for the kubelet" Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.566399 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:02 crc kubenswrapper[5173]: E1209 14:14:02.566903 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:03.066883362 +0000 UTC m=+125.992165609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.667682 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:02 crc kubenswrapper[5173]: E1209 14:14:02.668006 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:03.167960938 +0000 UTC m=+126.093243185 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.769411 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:02 crc kubenswrapper[5173]: E1209 14:14:02.769829 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:03.269813458 +0000 UTC m=+126.195095705 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.870493 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:02 crc kubenswrapper[5173]: E1209 14:14:02.870682 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:03.370654146 +0000 UTC m=+126.295936393 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.870964 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:02 crc kubenswrapper[5173]: E1209 14:14:02.871592 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:03.371566385 +0000 UTC m=+126.296848632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.972207 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:02 crc kubenswrapper[5173]: E1209 14:14:02.972460 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:03.472424044 +0000 UTC m=+126.397706301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:02 crc kubenswrapper[5173]: I1209 14:14:02.972910 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:02 crc kubenswrapper[5173]: E1209 14:14:02.973229 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:03.473215269 +0000 UTC m=+126.398497516 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:03 crc kubenswrapper[5173]: I1209 14:14:03.074305 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:03 crc kubenswrapper[5173]: E1209 14:14:03.074540 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:03.574510742 +0000 UTC m=+126.499792989 (durationBeforeRetry 500ms). 
Dec 09 14:14:03 crc kubenswrapper[5173]: I1209 14:14:03.152114 5173 patch_prober.go:28] interesting pod/console-operator-67c89758df-z495l container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Dec 09 14:14:03 crc kubenswrapper[5173]: I1209 14:14:03.152186 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-67c89758df-z495l" podUID="5e8f532f-b948-4468-9397-7318c60c6fa8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Dec 09 14:14:03 crc kubenswrapper[5173]: I1209 14:14:03.240865 5173 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-b8zmj container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Dec 09 14:14:03 crc kubenswrapper[5173]: I1209 14:14:03.240916 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj" podUID="f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Dec 09 14:14:03 crc kubenswrapper[5173]: I1209 14:14:03.241133 5173 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-z9d5g container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Dec 09 14:14:03 crc kubenswrapper[5173]: I1209 14:14:03.241165 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g" podUID="d171fe05-fe49-46fb-9407-bdc1f9272d4b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused"
Dec 09 14:14:03 crc kubenswrapper[5173]: I1209 14:14:03.265622 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-wznw7" podStartSLOduration=101.265603039 podStartE2EDuration="1m41.265603039s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:03.263828394 +0000 UTC m=+126.189110661" watchObservedRunningTime="2025-12-09 14:14:03.265603039 +0000 UTC m=+126.190885286"
Dec 09 14:14:03 crc kubenswrapper[5173]: I1209 14:14:03.306271 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-rbkgm" podStartSLOduration=101.306251474 podStartE2EDuration="1m41.306251474s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:03.305046936 +0000 UTC m=+126.230329203" watchObservedRunningTime="2025-12-09 14:14:03.306251474 +0000 UTC m=+126.231533721"
Dec 09 14:14:03 crc kubenswrapper[5173]: I1209 14:14:03.423140 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d"
Dec 09 14:14:03 crc kubenswrapper[5173]: I1209 14:14:03.424666 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 14:14:03 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld
Dec 09 14:14:03 crc kubenswrapper[5173]: [+]process-running ok
Dec 09 14:14:03 crc kubenswrapper[5173]: healthz check failed
Dec 09 14:14:03 crc kubenswrapper[5173]: I1209 14:14:03.424717 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 14:14:03 crc kubenswrapper[5173]: I1209 14:14:03.545705 5173 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-z9d5g container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Dec 09 14:14:03 crc kubenswrapper[5173]: I1209 14:14:03.545765 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g" podUID="d171fe05-fe49-46fb-9407-bdc1f9272d4b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused"
Dec 09 14:14:03 crc kubenswrapper[5173]: I1209 14:14:03.651673 5173 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-b8zmj container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Dec 09 14:14:03 crc kubenswrapper[5173]: I1209 14:14:03.651729 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj" podUID="f56c2c3c-9cd7-4ef2-9fa2-7fae10566c58" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Dec 09 14:14:03 crc kubenswrapper[5173]: I1209 14:14:03.703024 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Dec 09 14:14:04 crc kubenswrapper[5173]: I1209 14:14:04.430981 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 14:14:04 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld
Dec 09 14:14:04 crc kubenswrapper[5173]: [+]process-running ok
Dec 09 14:14:04 crc kubenswrapper[5173]: healthz check failed
Dec 09 14:14:04 crc kubenswrapper[5173]: I1209 14:14:04.431068 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 14:14:04 crc kubenswrapper[5173]: I1209 14:14:04.432402 5173 scope.go:117] "RemoveContainer" containerID="c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d"
Dec 09 14:14:04 crc kubenswrapper[5173]: I1209 14:14:04.458780 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-5fld4" podStartSLOduration=102.458762784 podStartE2EDuration="1m42.458762784s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:04.455317557 +0000 UTC m=+127.380599804" watchObservedRunningTime="2025-12-09 14:14:04.458762784 +0000 UTC m=+127.384045031"
Dec 09 14:14:04 crc kubenswrapper[5173]: I1209 14:14:04.510136 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" podStartSLOduration=102.510104753 podStartE2EDuration="1m42.510104753s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:04.493930819 +0000 UTC m=+127.419213076" watchObservedRunningTime="2025-12-09 14:14:04.510104753 +0000 UTC m=+127.435386990"
Dec 09 14:14:04 crc kubenswrapper[5173]: I1209 14:14:04.512122 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv" podStartSLOduration=102.512112855 podStartE2EDuration="1m42.512112855s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:04.509330208 +0000 UTC m=+127.434612465" watchObservedRunningTime="2025-12-09 14:14:04.512112855 +0000 UTC m=+127.437395102"
Dec 09 14:14:04 crc kubenswrapper[5173]: I1209 14:14:04.525946 5173 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-d66gc container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 09 14:14:04 crc kubenswrapper[5173]: I1209 14:14:04.526029 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc" podUID="ff6c1ec3-b9f2-4b18-ad51-a8e943ae96e7" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.090259 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jf4h7" event={"ID":"bd366c79-af35-434f-9179-c5ecf3974dd8","Type":"ContainerStarted","Data":"048e0be44726236efb32c485ffb8b709e9a34b5844ea5c41812815fe6b574849"}
Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.090422 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.095706 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.098489 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.098554 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.103052 5173 ???:1] "http: TLS handshake error from 192.168.126.11:45800: no serving certificate available for the kubelet" Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.103233 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.116463 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:05 crc kubenswrapper[5173]: E1209 14:14:05.116606 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:05.616580998 +0000 UTC m=+128.541863245 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.116841 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:05 crc kubenswrapper[5173]: E1209 14:14:05.117266 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:05.61725594 +0000 UTC m=+128.542538187 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.217562 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.217749 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f13bdbf-8f3c-425b-a709-d27afd43ba8b-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1f13bdbf-8f3c-425b-a709-d27afd43ba8b\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.217783 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f13bdbf-8f3c-425b-a709-d27afd43ba8b-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1f13bdbf-8f3c-425b-a709-d27afd43ba8b\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:14:05 crc kubenswrapper[5173]: E1209 14:14:05.218506 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:05.71848448 +0000 UTC m=+128.643766727 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.320452 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f13bdbf-8f3c-425b-a709-d27afd43ba8b-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1f13bdbf-8f3c-425b-a709-d27afd43ba8b\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.320713 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f13bdbf-8f3c-425b-a709-d27afd43ba8b-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1f13bdbf-8f3c-425b-a709-d27afd43ba8b\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.320842 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f13bdbf-8f3c-425b-a709-d27afd43ba8b-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1f13bdbf-8f3c-425b-a709-d27afd43ba8b\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.320949 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:05 crc kubenswrapper[5173]: E1209 14:14:05.321424 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:05.821406093 +0000 UTC m=+128.746688340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.346930 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f13bdbf-8f3c-425b-a709-d27afd43ba8b-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1f13bdbf-8f3c-425b-a709-d27afd43ba8b\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.410559 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.421961 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:05 crc kubenswrapper[5173]: E1209 14:14:05.422394 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:05.922375386 +0000 UTC m=+128.847657633 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.425761 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:14:05 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld Dec 09 14:14:05 crc kubenswrapper[5173]: [+]process-running ok Dec 09 14:14:05 crc kubenswrapper[5173]: healthz check failed Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.425853 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.541806 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:05 crc kubenswrapper[5173]: E1209 14:14:05.542630 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:06.042617239 +0000 UTC m=+128.967899486 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.642897 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:05 crc kubenswrapper[5173]: E1209 14:14:05.643279 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:06.14325303 +0000 UTC m=+129.068535277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.643668 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:05 crc kubenswrapper[5173]: E1209 14:14:05.644018 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:06.144010014 +0000 UTC m=+129.069292261 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.744426 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:05 crc kubenswrapper[5173]: E1209 14:14:05.744624 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:06.244590054 +0000 UTC m=+129.169872301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.744836 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:05 crc kubenswrapper[5173]: E1209 14:14:05.745325 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:06.245312617 +0000 UTC m=+129.170594874 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.845776 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:05 crc kubenswrapper[5173]: E1209 14:14:05.845874 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:06.345855626 +0000 UTC m=+129.271137873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.846275 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:05 crc kubenswrapper[5173]: E1209 14:14:05.846545 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:06.346537717 +0000 UTC m=+129.271819964 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.874596 5173 patch_prober.go:28] interesting pod/apiserver-8596bd845d-rxvxv container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.16:8443/livez\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.874671 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv" podUID="b2ab9ef6-9c83-482d-9ea5-148c66ca62bd" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.16:8443/livez\": dial tcp 10.217.0.16:8443: connect: connection refused" Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.947725 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:05 crc kubenswrapper[5173]: E1209 14:14:05.947942 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:06.447905792 +0000 UTC m=+129.373188039 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:05 crc kubenswrapper[5173]: I1209 14:14:05.948493 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:05 crc kubenswrapper[5173]: E1209 14:14:05.948776 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:06.448768769 +0000 UTC m=+129.374051016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.049557 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:06 crc kubenswrapper[5173]: E1209 14:14:06.049905 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:06.549879256 +0000 UTC m=+129.475161503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.050148 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:06 crc kubenswrapper[5173]: E1209 14:14:06.050529 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:06.550520926 +0000 UTC m=+129.475803173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.151722 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:06 crc kubenswrapper[5173]: E1209 14:14:06.151880 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:06.65185754 +0000 UTC m=+129.577139777 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.152395 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:06 crc kubenswrapper[5173]: E1209 14:14:06.152791 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:06.652779489 +0000 UTC m=+129.578061736 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.167839 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv" Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.167881 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.167969 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-z495l" Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.168070 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp" event={"ID":"b00790a6-0331-44bd-9ddb-10d0598d5d74","Type":"ContainerStarted","Data":"b377f74e7027b15a87addc03db0cd4e638cade0cb6bf9e874faec6b23a7d45c0"} Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.168210 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.168231 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv" Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.168281 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.170408 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.173057 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.177195 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-rqxjt" event={"ID":"99788f65-7403-4cb0-91bb-f318172f7171","Type":"ContainerStarted","Data":"36b252ae9bccea3e7fefedd420938b6df53f0ecc8d62e18ffeb53bb00c056bd2"} Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.223134 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"1f13bdbf-8f3c-425b-a709-d27afd43ba8b","Type":"ContainerStarted","Data":"c48831c9285aeacb730c4053f56acf89685b5b90c8121eadda9243ac67c79a16"} Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.253555 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.253760 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70a69462-5dd0-4aa2-88ab-6a3d43606d7e-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"70a69462-5dd0-4aa2-88ab-6a3d43606d7e\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.253804 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70a69462-5dd0-4aa2-88ab-6a3d43606d7e-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"70a69462-5dd0-4aa2-88ab-6a3d43606d7e\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:14:06 crc kubenswrapper[5173]: E1209 14:14:06.253942 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:06.753923807 +0000 UTC m=+129.679206054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.355371 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70a69462-5dd0-4aa2-88ab-6a3d43606d7e-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"70a69462-5dd0-4aa2-88ab-6a3d43606d7e\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.355455 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.355485 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70a69462-5dd0-4aa2-88ab-6a3d43606d7e-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"70a69462-5dd0-4aa2-88ab-6a3d43606d7e\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.355796 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70a69462-5dd0-4aa2-88ab-6a3d43606d7e-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"70a69462-5dd0-4aa2-88ab-6a3d43606d7e\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:14:06 crc kubenswrapper[5173]: E1209 14:14:06.356002 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:06.855992383 +0000 UTC m=+129.781274630 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.378791 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70a69462-5dd0-4aa2-88ab-6a3d43606d7e-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"70a69462-5dd0-4aa2-88ab-6a3d43606d7e\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.424144 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:14:06 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld Dec 09 14:14:06 crc kubenswrapper[5173]: [+]process-running ok Dec 09 14:14:06 crc kubenswrapper[5173]: healthz check failed Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.424222 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.457213 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:06 crc kubenswrapper[5173]: E1209 14:14:06.457441 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:06.95741173 +0000 UTC m=+129.882693987 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.457705 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:06 crc kubenswrapper[5173]: E1209 14:14:06.458021 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:06.958013019 +0000 UTC m=+129.883295266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.485247 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.559369 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:06 crc kubenswrapper[5173]: E1209 14:14:06.559667 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:07.059651502 +0000 UTC m=+129.984933749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.660389 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:06 crc kubenswrapper[5173]: E1209 14:14:06.661014 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:07.161000806 +0000 UTC m=+130.086283053 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.764106 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:06 crc kubenswrapper[5173]: E1209 14:14:06.764596 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:07.26457424 +0000 UTC m=+130.189856487 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.764628 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.802109 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-qjcxb" podStartSLOduration=104.802082097 podStartE2EDuration="1m44.802082097s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:06.801735947 +0000 UTC m=+129.727018204" watchObservedRunningTime="2025-12-09 14:14:06.802082097 +0000 UTC m=+129.727364374" Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.866252 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:06 crc kubenswrapper[5173]: E1209 14:14:06.866728 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:07.366712689 +0000 UTC m=+130.291994936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.968163 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:06 crc kubenswrapper[5173]: E1209 14:14:06.968326 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:07.468303681 +0000 UTC m=+130.393585928 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:06 crc kubenswrapper[5173]: I1209 14:14:06.968500 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:06 crc kubenswrapper[5173]: E1209 14:14:06.968832 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:07.468822188 +0000 UTC m=+130.394104435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.070305 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:07 crc kubenswrapper[5173]: E1209 14:14:07.070562 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:07.570530732 +0000 UTC m=+130.495812989 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.070846 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:07 crc kubenswrapper[5173]: E1209 14:14:07.071204 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:07.571189093 +0000 UTC m=+130.496471340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.075595 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-d66gc" Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.077851 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g" Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.102262 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-rqxjt" podStartSLOduration=105.10224199 podStartE2EDuration="1m45.10224199s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:07.100043471 +0000 UTC m=+130.025325738" watchObservedRunningTime="2025-12-09 14:14:07.10224199 +0000 UTC m=+130.027524237" Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.114122 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-b8zmj" Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.172476 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:07 crc kubenswrapper[5173]: E1209 14:14:07.172665 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:07.672633111 +0000 UTC m=+130.597915358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.173209 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:07 crc kubenswrapper[5173]: E1209 14:14:07.175468 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:07.675454318 +0000 UTC m=+130.600736565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.179292 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h" podStartSLOduration=105.179278497 podStartE2EDuration="1m45.179278497s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:07.14464594 +0000 UTC m=+130.069928207" watchObservedRunningTime="2025-12-09 14:14:07.179278497 +0000 UTC m=+130.104560744" Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.221841 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-jf4h7" podStartSLOduration=105.221821971 podStartE2EDuration="1m45.221821971s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:07.217739295 +0000 UTC m=+130.143021552" watchObservedRunningTime="2025-12-09 14:14:07.221821971 +0000 UTC m=+130.147104218" Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.258855 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"70a69462-5dd0-4aa2-88ab-6a3d43606d7e","Type":"ContainerStarted","Data":"59eb62e82e87c55587d5eaa8a674101f928385bb04275bb65e07184341b8973d"} Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.273943 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:07 crc kubenswrapper[5173]: E1209 14:14:07.274236 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:07.774218092 +0000 UTC m=+130.699500339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.280597 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-s7fzg" event={"ID":"84c3a797-e34a-463b-b598-7b75849c651b","Type":"ContainerStarted","Data":"1284ee4754f3119b3d057a4c7b930d27670f9a7fb07e6d604dd7ab6915abce91"} Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.280707 5173 patch_prober.go:28] interesting pod/downloads-747b44746d-zhlr7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.280776 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-zhlr7" podUID="6794662c-7933-4e08-870f-c44892aef039" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.316929 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4nhj5" event={"ID":"54ba16c7-dd59-4faa-9932-7998a5377969","Type":"ContainerStarted","Data":"10c41c1756aa978423378f7b891700768d8306909dd2f4a162decfd62b38359e"} Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.344216 5173 patch_prober.go:28] interesting pod/console-64d44f6ddf-q5kgl container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.344293 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-q5kgl" podUID="a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.381067 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: 
\"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:07 crc kubenswrapper[5173]: E1209 14:14:07.381390 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:07.881377637 +0000 UTC m=+130.806659884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.442724 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:14:07 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld Dec 09 14:14:07 crc kubenswrapper[5173]: [+]process-running ok Dec 09 14:14:07 crc kubenswrapper[5173]: healthz check failed Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.442839 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.481764 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp" podStartSLOduration=106.481744351 podStartE2EDuration="1m46.481744351s" podCreationTimestamp="2025-12-09 14:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:07.442808919 +0000 UTC m=+130.368091166" watchObservedRunningTime="2025-12-09 14:14:07.481744351 +0000 UTC m=+130.407026598" Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.482367 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:07 crc kubenswrapper[5173]: E1209 14:14:07.483247 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:07.983184316 +0000 UTC m=+130.908466553 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.484589 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-4nhj5" podStartSLOduration=23.484581579 podStartE2EDuration="23.484581579s" podCreationTimestamp="2025-12-09 14:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:07.481236995 +0000 UTC m=+130.406519242" watchObservedRunningTime="2025-12-09 14:14:07.484581579 +0000 UTC m=+130.409863826" Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.583902 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:07 crc kubenswrapper[5173]: E1209 14:14:07.584418 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:08.084326984 +0000 UTC m=+131.009609271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.687478 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:07 crc kubenswrapper[5173]: E1209 14:14:07.687733 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:08.187681281 +0000 UTC m=+131.112963518 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.688063 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:07 crc kubenswrapper[5173]: E1209 14:14:07.699076 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:08.199055185 +0000 UTC m=+131.124337432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.798059 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:07 crc kubenswrapper[5173]: E1209 14:14:07.798321 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:08.298277753 +0000 UTC m=+131.223560000 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.798667 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:07 crc kubenswrapper[5173]: E1209 14:14:07.799212 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:08.299182771 +0000 UTC m=+131.224465188 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.895994 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h" Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.896225 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h" Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.896241 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h" Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.900925 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:07 crc kubenswrapper[5173]: E1209 14:14:07.901263 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:08.401213686 +0000 UTC m=+131.326496133 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.903290 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:07 crc kubenswrapper[5173]: E1209 14:14:07.903957 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:08.403939411 +0000 UTC m=+131.329221658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:07 crc kubenswrapper[5173]: I1209 14:14:07.907165 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-57k5h" Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.005554 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:08 crc kubenswrapper[5173]: E1209 14:14:08.005688 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:08.505662227 +0000 UTC m=+131.430944474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.006092 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:08 crc kubenswrapper[5173]: E1209 14:14:08.007941 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:08.507928218 +0000 UTC m=+131.433210465 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.108659 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:08 crc kubenswrapper[5173]: E1209 14:14:08.109186 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:08.609170489 +0000 UTC m=+131.534452736 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.211737 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:08 crc kubenswrapper[5173]: E1209 14:14:08.212226 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:08.712203715 +0000 UTC m=+131.637486152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.313625 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:08 crc kubenswrapper[5173]: E1209 14:14:08.313739 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:08.813719535 +0000 UTC m=+131.739001782 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.314016 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:08 crc kubenswrapper[5173]: E1209 14:14:08.314329 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:08.814319764 +0000 UTC m=+131.739602011 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.344820 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5f78n" event={"ID":"46276f4a-6b89-4791-b0a5-820978009c5e","Type":"ContainerStarted","Data":"5e4dea619b6689045648f8e4458155289af16139dfa845e3f836fabcb9af14f0"} Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.363492 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5f78n" podStartSLOduration=24.363468553 podStartE2EDuration="24.363468553s" podCreationTimestamp="2025-12-09 14:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:08.360065698 +0000 UTC m=+131.285347955" watchObservedRunningTime="2025-12-09 14:14:08.363468553 +0000 UTC m=+131.288750800" Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.366288 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.389375 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"795472ed85f8907273dd1d43c9bbbee761c69d5332067589f55aea901cd28a66"} Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.390445 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.405080 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k" event={"ID":"59e9e420-971a-4d09-80f7-1039326724b8","Type":"ContainerStarted","Data":"08afd281d4905bc3b828d8fdb7315a9b34349f5cd53296349723c1179e7f934f"} Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.416364 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:08 crc kubenswrapper[5173]: E1209 14:14:08.417101 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:08.917072192 +0000 UTC m=+131.842354439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.426976 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=50.42695048 podStartE2EDuration="50.42695048s" podCreationTimestamp="2025-12-09 14:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:08.421422618 +0000 UTC m=+131.346704875" watchObservedRunningTime="2025-12-09 14:14:08.42695048 +0000 UTC m=+131.352232727" Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.440525 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:14:08 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld Dec 09 14:14:08 crc kubenswrapper[5173]: [+]process-running ok Dec 09 14:14:08 crc kubenswrapper[5173]: healthz check failed Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.440865 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.480813 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-zmsp9" event={"ID":"f2962534-956e-497a-89af-1b5d39a61c84","Type":"ContainerStarted","Data":"2c795d8b3d351962dafaa2f60f995c9fc4eb100c36c2f3e234ed1022b68fd44f"} Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.509522 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kfj8k" event={"ID":"34ce85ee-5f93-46ea-a866-72bb238285ff","Type":"ContainerStarted","Data":"b9d90dba61ba197f9bae30a655fecfb00cecbf06872d34bf60d16def55303d0c"} Dec 09 14:14:08 crc 
kubenswrapper[5173]: I1209 14:14:08.509848 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-sk58k" podStartSLOduration=106.509827389 podStartE2EDuration="1m46.509827389s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:08.475793979 +0000 UTC m=+131.401076247" watchObservedRunningTime="2025-12-09 14:14:08.509827389 +0000 UTC m=+131.435109636" Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.523537 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:08 crc kubenswrapper[5173]: E1209 14:14:08.524938 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:09.024916659 +0000 UTC m=+131.950198906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.530390 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7r65t" event={"ID":"f3fe3c75-5f1d-47f2-9b85-57e0ecbf8966","Type":"ContainerStarted","Data":"04e1500345f55eb901265cf284d56dc8449c2689a311252fc431cbe804aa1769"} Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.555690 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" event={"ID":"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54","Type":"ContainerStarted","Data":"8c699c38e9f9c021d55d6ec04cf7bd864b37be406495b6a7ffd765cc082600c1"} Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.555743 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.556171 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-s7fzg" Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.564579 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-zmsp9" podStartSLOduration=106.564560553 podStartE2EDuration="1m46.564560553s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:08.508928601 +0000 UTC m=+131.434210858" watchObservedRunningTime="2025-12-09 14:14:08.564560553 +0000 UTC m=+131.489842810" Dec 
09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.606621 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7r65t" podStartSLOduration=106.606606121 podStartE2EDuration="1m46.606606121s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:08.56736716 +0000 UTC m=+131.492649447" watchObservedRunningTime="2025-12-09 14:14:08.606606121 +0000 UTC m=+131.531888358" Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.628888 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:08 crc kubenswrapper[5173]: E1209 14:14:08.629673 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:09.129658468 +0000 UTC m=+132.054940715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.655475 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" podStartSLOduration=23.655451521 podStartE2EDuration="23.655451521s" podCreationTimestamp="2025-12-09 14:13:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:08.654616435 +0000 UTC m=+131.579898692" watchObservedRunningTime="2025-12-09 14:14:08.655451521 +0000 UTC m=+131.580733758" Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.655775 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-s7fzg" podStartSLOduration=106.655770871 podStartE2EDuration="1m46.655770871s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:08.608017915 +0000 UTC m=+131.533300172" watchObservedRunningTime="2025-12-09 14:14:08.655770871 +0000 UTC m=+131.581053118" Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.693199 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.730379 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:08 crc kubenswrapper[5173]: E1209 14:14:08.730793 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:09.230778956 +0000 UTC m=+132.156061203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.790388 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7j5wv"] Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.830968 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:08 crc kubenswrapper[5173]: E1209 14:14:08.831178 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:09.331147419 +0000 UTC m=+132.256429666 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.831451 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:08 crc kubenswrapper[5173]: E1209 14:14:08.831729 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:09.331716657 +0000 UTC m=+132.256998904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:08 crc kubenswrapper[5173]: I1209 14:14:08.933062 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:08 crc kubenswrapper[5173]: E1209 14:14:08.933279 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:09.433252448 +0000 UTC m=+132.358534695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.034992 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:09 crc kubenswrapper[5173]: E1209 14:14:09.035276 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:09.535262892 +0000 UTC m=+132.460545139 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:09 crc kubenswrapper[5173]: E1209 14:14:09.122622 5173 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod1f13bdbf_8f3c_425b_a709_d27afd43ba8b.slice/crio-bd32309a65efc5e1fad9b58fd407a07f025f217737257b39816b81ce28c8b399.scope\": RecentStats: unable to find data in memory cache]" Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.136449 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:09 crc kubenswrapper[5173]: E1209 14:14:09.136692 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:09.636660128 +0000 UTC m=+132.561942375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.237720 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:09 crc kubenswrapper[5173]: E1209 14:14:09.238047 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:09.738029073 +0000 UTC m=+132.663311320 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.307976 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mq8bj"] Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.315466 5173 patch_prober.go:28] interesting pod/downloads-747b44746d-zhlr7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.315535 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-zhlr7" podUID="6794662c-7933-4e08-870f-c44892aef039" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.331425 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mq8bj"] Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.331592 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mq8bj" Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.334497 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.338418 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:09 crc kubenswrapper[5173]: E1209 14:14:09.338720 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:09.838700747 +0000 UTC m=+132.763982994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.424017 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:14:09 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld Dec 09 14:14:09 crc kubenswrapper[5173]: [+]process-running ok Dec 09 14:14:09 crc kubenswrapper[5173]: healthz check failed Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.424117 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.441520 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5qms\" (UniqueName: \"kubernetes.io/projected/a79afc8b-ca22-4e56-b7a9-d725b23e30ff-kube-api-access-f5qms\") pod \"community-operators-mq8bj\" (UID: \"a79afc8b-ca22-4e56-b7a9-d725b23e30ff\") " pod="openshift-marketplace/community-operators-mq8bj" Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.441588 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a79afc8b-ca22-4e56-b7a9-d725b23e30ff-catalog-content\") pod \"community-operators-mq8bj\" (UID: \"a79afc8b-ca22-4e56-b7a9-d725b23e30ff\") " pod="openshift-marketplace/community-operators-mq8bj" Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.441635 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a79afc8b-ca22-4e56-b7a9-d725b23e30ff-utilities\") pod \"community-operators-mq8bj\" (UID: \"a79afc8b-ca22-4e56-b7a9-d725b23e30ff\") " pod="openshift-marketplace/community-operators-mq8bj" Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.441688 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:09 crc kubenswrapper[5173]: E1209 14:14:09.442033 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:09.942019052 +0000 UTC m=+132.867301299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.509870 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-95c8n"] Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.542489 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:09 crc kubenswrapper[5173]: E1209 14:14:09.542678 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:10.042645154 +0000 UTC m=+132.967927401 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.542980 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.543091 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f5qms\" (UniqueName: \"kubernetes.io/projected/a79afc8b-ca22-4e56-b7a9-d725b23e30ff-kube-api-access-f5qms\") pod \"community-operators-mq8bj\" (UID: \"a79afc8b-ca22-4e56-b7a9-d725b23e30ff\") " pod="openshift-marketplace/community-operators-mq8bj" Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.543161 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a79afc8b-ca22-4e56-b7a9-d725b23e30ff-catalog-content\") pod \"community-operators-mq8bj\" (UID: \"a79afc8b-ca22-4e56-b7a9-d725b23e30ff\") " pod="openshift-marketplace/community-operators-mq8bj" Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.543267 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a79afc8b-ca22-4e56-b7a9-d725b23e30ff-utilities\") pod \"community-operators-mq8bj\" (UID: \"a79afc8b-ca22-4e56-b7a9-d725b23e30ff\") " pod="openshift-marketplace/community-operators-mq8bj" Dec 09 14:14:09 crc kubenswrapper[5173]: 
E1209 14:14:09.543490 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:10.04347198 +0000 UTC m=+132.968754267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.544094 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a79afc8b-ca22-4e56-b7a9-d725b23e30ff-catalog-content\") pod \"community-operators-mq8bj\" (UID: \"a79afc8b-ca22-4e56-b7a9-d725b23e30ff\") " pod="openshift-marketplace/community-operators-mq8bj" Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.544180 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a79afc8b-ca22-4e56-b7a9-d725b23e30ff-utilities\") pod \"community-operators-mq8bj\" (UID: \"a79afc8b-ca22-4e56-b7a9-d725b23e30ff\") " pod="openshift-marketplace/community-operators-mq8bj" Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.566619 5173 generic.go:358] "Generic (PLEG): container finished" podID="1f13bdbf-8f3c-425b-a709-d27afd43ba8b" containerID="bd32309a65efc5e1fad9b58fd407a07f025f217737257b39816b81ce28c8b399" exitCode=0 Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.569471 5173 generic.go:358] "Generic (PLEG): container finished" podID="70a69462-5dd0-4aa2-88ab-6a3d43606d7e" containerID="b04ce798e15ec2990f2367027ccea4edfa11331cac68969492a2035529c55761" exitCode=0 Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.580607 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"1f13bdbf-8f3c-425b-a709-d27afd43ba8b","Type":"ContainerDied","Data":"bd32309a65efc5e1fad9b58fd407a07f025f217737257b39816b81ce28c8b399"} Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.580662 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"70a69462-5dd0-4aa2-88ab-6a3d43606d7e","Type":"ContainerDied","Data":"b04ce798e15ec2990f2367027ccea4edfa11331cac68969492a2035529c55761"} Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.580675 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kfj8k" event={"ID":"34ce85ee-5f93-46ea-a866-72bb238285ff","Type":"ContainerStarted","Data":"a3b601e10a39a2b2a14712b88db78b3a8c04fa2f8cc931cffc68a35e5707b73c"} Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.580694 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-95c8n"] Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.580905 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-95c8n"
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.582228 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5qms\" (UniqueName: \"kubernetes.io/projected/a79afc8b-ca22-4e56-b7a9-d725b23e30ff-kube-api-access-f5qms\") pod \"community-operators-mq8bj\" (UID: \"a79afc8b-ca22-4e56-b7a9-d725b23e30ff\") " pod="openshift-marketplace/community-operators-mq8bj"
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.584806 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.622579 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-kfj8k" podStartSLOduration=25.622558962 podStartE2EDuration="25.622558962s" podCreationTimestamp="2025-12-09 14:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:09.620556079 +0000 UTC m=+132.545838336" watchObservedRunningTime="2025-12-09 14:14:09.622558962 +0000 UTC m=+132.547841209"
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.644700 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:14:09 crc kubenswrapper[5173]: E1209 14:14:09.644987 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:10.144945658 +0000 UTC m=+133.070227905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.645106 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.645321 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn2kq\" (UniqueName: \"kubernetes.io/projected/8536effa-529d-4962-ab4e-0d8e1c3c4d93-kube-api-access-hn2kq\") pod \"certified-operators-95c8n\" (UID: \"8536effa-529d-4962-ab4e-0d8e1c3c4d93\") " pod="openshift-marketplace/certified-operators-95c8n"
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.645774 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8536effa-529d-4962-ab4e-0d8e1c3c4d93-utilities\") pod \"certified-operators-95c8n\" (UID: \"8536effa-529d-4962-ab4e-0d8e1c3c4d93\") " pod="openshift-marketplace/certified-operators-95c8n"
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.645987 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8536effa-529d-4962-ab4e-0d8e1c3c4d93-catalog-content\") pod \"certified-operators-95c8n\" (UID: \"8536effa-529d-4962-ab4e-0d8e1c3c4d93\") " pod="openshift-marketplace/certified-operators-95c8n"
Dec 09 14:14:09 crc kubenswrapper[5173]: E1209 14:14:09.646993 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:10.146971281 +0000 UTC m=+133.072253718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.705069 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mq8bj"
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.711526 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bjpqk"]
Dec 09 14:14:09 crc kubenswrapper[5173]: E1209 14:14:09.747122 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:10.247101497 +0000 UTC m=+133.172383744 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.747023 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.747416 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hn2kq\" (UniqueName: \"kubernetes.io/projected/8536effa-529d-4962-ab4e-0d8e1c3c4d93-kube-api-access-hn2kq\") pod \"certified-operators-95c8n\" (UID: \"8536effa-529d-4962-ab4e-0d8e1c3c4d93\") " pod="openshift-marketplace/certified-operators-95c8n"
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.747502 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8536effa-529d-4962-ab4e-0d8e1c3c4d93-utilities\") pod \"certified-operators-95c8n\" (UID: \"8536effa-529d-4962-ab4e-0d8e1c3c4d93\") " pod="openshift-marketplace/certified-operators-95c8n"
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.747547 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8536effa-529d-4962-ab4e-0d8e1c3c4d93-catalog-content\") pod \"certified-operators-95c8n\" (UID: \"8536effa-529d-4962-ab4e-0d8e1c3c4d93\") " pod="openshift-marketplace/certified-operators-95c8n"
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.747612 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:14:09 crc kubenswrapper[5173]: E1209 14:14:09.747905 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:10.247895052 +0000 UTC m=+133.173177299 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.748763 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8536effa-529d-4962-ab4e-0d8e1c3c4d93-utilities\") pod \"certified-operators-95c8n\" (UID: \"8536effa-529d-4962-ab4e-0d8e1c3c4d93\") " pod="openshift-marketplace/certified-operators-95c8n"
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.749039 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8536effa-529d-4962-ab4e-0d8e1c3c4d93-catalog-content\") pod \"certified-operators-95c8n\" (UID: \"8536effa-529d-4962-ab4e-0d8e1c3c4d93\") " pod="openshift-marketplace/certified-operators-95c8n"
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.783925 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn2kq\" (UniqueName: \"kubernetes.io/projected/8536effa-529d-4962-ab4e-0d8e1c3c4d93-kube-api-access-hn2kq\") pod \"certified-operators-95c8n\" (UID: \"8536effa-529d-4962-ab4e-0d8e1c3c4d93\") " pod="openshift-marketplace/certified-operators-95c8n"
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.805423 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bjpqk"]
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.805575 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bjpqk"
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.849037 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:14:09 crc kubenswrapper[5173]: E1209 14:14:09.849256 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:10.349222676 +0000 UTC m=+133.274504923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.849497 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:14:09 crc kubenswrapper[5173]: E1209 14:14:09.849885 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:10.349869196 +0000 UTC m=+133.275151443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.913640 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b624h"]
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.926881 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-95c8n"
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.938497 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b624h"]
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.938711 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b624h"
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.951105 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.952138 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gp4k\" (UniqueName: \"kubernetes.io/projected/d4b50aa3-6227-4e8a-8dbd-e56b695472c1-kube-api-access-2gp4k\") pod \"community-operators-bjpqk\" (UID: \"d4b50aa3-6227-4e8a-8dbd-e56b695472c1\") " pod="openshift-marketplace/community-operators-bjpqk"
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.952264 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b50aa3-6227-4e8a-8dbd-e56b695472c1-utilities\") pod \"community-operators-bjpqk\" (UID: \"d4b50aa3-6227-4e8a-8dbd-e56b695472c1\") " pod="openshift-marketplace/community-operators-bjpqk"
Dec 09 14:14:09 crc kubenswrapper[5173]: I1209 14:14:09.952319 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b50aa3-6227-4e8a-8dbd-e56b695472c1-catalog-content\") pod \"community-operators-bjpqk\" (UID: \"d4b50aa3-6227-4e8a-8dbd-e56b695472c1\") " pod="openshift-marketplace/community-operators-bjpqk"
Dec 09 14:14:09 crc kubenswrapper[5173]: E1209 14:14:09.952529 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:10.452500681 +0000 UTC m=+133.377782928 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.039687 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mq8bj"]
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.056451 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2gp4k\" (UniqueName: \"kubernetes.io/projected/d4b50aa3-6227-4e8a-8dbd-e56b695472c1-kube-api-access-2gp4k\") pod \"community-operators-bjpqk\" (UID: \"d4b50aa3-6227-4e8a-8dbd-e56b695472c1\") " pod="openshift-marketplace/community-operators-bjpqk"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.056509 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvdrf\" (UniqueName: \"kubernetes.io/projected/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67-kube-api-access-xvdrf\") pod \"certified-operators-b624h\" (UID: \"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67\") " pod="openshift-marketplace/certified-operators-b624h"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.056537 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.056585 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67-catalog-content\") pod \"certified-operators-b624h\" (UID: \"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67\") " pod="openshift-marketplace/certified-operators-b624h"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.056610 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b50aa3-6227-4e8a-8dbd-e56b695472c1-utilities\") pod \"community-operators-bjpqk\" (UID: \"d4b50aa3-6227-4e8a-8dbd-e56b695472c1\") " pod="openshift-marketplace/community-operators-bjpqk"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.056640 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b50aa3-6227-4e8a-8dbd-e56b695472c1-catalog-content\") pod \"community-operators-bjpqk\" (UID: \"d4b50aa3-6227-4e8a-8dbd-e56b695472c1\") " pod="openshift-marketplace/community-operators-bjpqk"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.056663 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67-utilities\") pod \"certified-operators-b624h\" (UID: \"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67\") " pod="openshift-marketplace/certified-operators-b624h"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.057510 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b50aa3-6227-4e8a-8dbd-e56b695472c1-catalog-content\") pod \"community-operators-bjpqk\" (UID: \"d4b50aa3-6227-4e8a-8dbd-e56b695472c1\") " pod="openshift-marketplace/community-operators-bjpqk"
Dec 09 14:14:10 crc kubenswrapper[5173]: E1209 14:14:10.057570 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:10.557323262 +0000 UTC m=+133.482605689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.060798 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b50aa3-6227-4e8a-8dbd-e56b695472c1-utilities\") pod \"community-operators-bjpqk\" (UID: \"d4b50aa3-6227-4e8a-8dbd-e56b695472c1\") " pod="openshift-marketplace/community-operators-bjpqk"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.087473 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gp4k\" (UniqueName: \"kubernetes.io/projected/d4b50aa3-6227-4e8a-8dbd-e56b695472c1-kube-api-access-2gp4k\") pod \"community-operators-bjpqk\" (UID: \"d4b50aa3-6227-4e8a-8dbd-e56b695472c1\") " pod="openshift-marketplace/community-operators-bjpqk"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.136262 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bjpqk"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.157618 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.157949 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67-catalog-content\") pod \"certified-operators-b624h\" (UID: \"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67\") " pod="openshift-marketplace/certified-operators-b624h"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.157996 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67-utilities\") pod \"certified-operators-b624h\" (UID: \"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67\") " pod="openshift-marketplace/certified-operators-b624h"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.158073 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xvdrf\" (UniqueName: \"kubernetes.io/projected/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67-kube-api-access-xvdrf\") pod \"certified-operators-b624h\" (UID: \"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67\") " pod="openshift-marketplace/certified-operators-b624h"
Dec 09 14:14:10 crc kubenswrapper[5173]: E1209 14:14:10.158493 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:10.658476451 +0000 UTC m=+133.583758698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.158831 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67-catalog-content\") pod \"certified-operators-b624h\" (UID: \"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67\") " pod="openshift-marketplace/certified-operators-b624h"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.158847 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67-utilities\") pod \"certified-operators-b624h\" (UID: \"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67\") " pod="openshift-marketplace/certified-operators-b624h"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.183488 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvdrf\" (UniqueName: \"kubernetes.io/projected/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67-kube-api-access-xvdrf\") pod \"certified-operators-b624h\" (UID: \"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67\") " pod="openshift-marketplace/certified-operators-b624h"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.200532 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-95c8n"]
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.260158 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:14:10 crc kubenswrapper[5173]: E1209 14:14:10.260557 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:10.760541278 +0000 UTC m=+133.685823525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:10 crc kubenswrapper[5173]: W1209 14:14:10.260688 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8536effa_529d_4962_ab4e_0d8e1c3c4d93.slice/crio-662057c62b9ba536d7fba8c8abf4c9c4e5454fd522e6921852b1b30e6c9a6c38 WatchSource:0}: Error finding container 662057c62b9ba536d7fba8c8abf4c9c4e5454fd522e6921852b1b30e6c9a6c38: Status 404 returned error can't find the container with id 662057c62b9ba536d7fba8c8abf4c9c4e5454fd522e6921852b1b30e6c9a6c38
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.280151 5173 ???:1] "http: TLS handshake error from 192.168.126.11:45804: no serving certificate available for the kubelet"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.291159 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b624h"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.361240 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:14:10 crc kubenswrapper[5173]: E1209 14:14:10.362503 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:10.86247619 +0000 UTC m=+133.787758437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.386058 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bjpqk"]
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.424595 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 14:14:10 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld
Dec 09 14:14:10 crc kubenswrapper[5173]: [+]process-running ok
Dec 09 14:14:10 crc kubenswrapper[5173]: healthz check failed
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.424673 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.463775 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:14:10 crc kubenswrapper[5173]: E1209 14:14:10.464113 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:10.964100643 +0000 UTC m=+133.889382890 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.498517 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b624h"]
Dec 09 14:14:10 crc kubenswrapper[5173]: W1209 14:14:10.503475 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4723c0a4_6d37_4bcd_9189_4a9d1f6cfb67.slice/crio-c668af20b4709a77feaceb54cb54dd31413383c8759049d5535bb5c15c2a0ec0 WatchSource:0}: Error finding container c668af20b4709a77feaceb54cb54dd31413383c8759049d5535bb5c15c2a0ec0: Status 404 returned error can't find the container with id c668af20b4709a77feaceb54cb54dd31413383c8759049d5535bb5c15c2a0ec0
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.564661 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:14:10 crc kubenswrapper[5173]: E1209 14:14:10.564811 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:11.064783947 +0000 UTC m=+133.990066194 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.565273 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:14:10 crc kubenswrapper[5173]: E1209 14:14:10.565693 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:11.065682525 +0000 UTC m=+133.990964842 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.586715 5173 generic.go:358] "Generic (PLEG): container finished" podID="a79afc8b-ca22-4e56-b7a9-d725b23e30ff" containerID="b888bd4aa823d00fb8ae9d954bd06f242d7dbf04912d08a7f07c3d48b38e6583" exitCode=0
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.586940 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq8bj" event={"ID":"a79afc8b-ca22-4e56-b7a9-d725b23e30ff","Type":"ContainerDied","Data":"b888bd4aa823d00fb8ae9d954bd06f242d7dbf04912d08a7f07c3d48b38e6583"}
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.586967 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq8bj" event={"ID":"a79afc8b-ca22-4e56-b7a9-d725b23e30ff","Type":"ContainerStarted","Data":"0a65410c30d86ba57bfb9bcc892dc6be200e0ff08e6ad8838cc87e62dbd1048e"}
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.590467 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-twcnj" event={"ID":"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d","Type":"ContainerStarted","Data":"adf3129c6de8f0024fb36dbfa2f8169f41500c3710ff813dbf8359e0d440e498"}
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.592449 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95c8n" event={"ID":"8536effa-529d-4962-ab4e-0d8e1c3c4d93","Type":"ContainerStarted","Data":"662057c62b9ba536d7fba8c8abf4c9c4e5454fd522e6921852b1b30e6c9a6c38"}
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.594416 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b624h" event={"ID":"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67","Type":"ContainerStarted","Data":"c668af20b4709a77feaceb54cb54dd31413383c8759049d5535bb5c15c2a0ec0"}
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.598873 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjpqk" event={"ID":"d4b50aa3-6227-4e8a-8dbd-e56b695472c1","Type":"ContainerStarted","Data":"3693fde0d88795c65cbeedf8dd9856f2e518a54d870ed0d0653bdc1b7689a58a"}
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.666612 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:14:10 crc kubenswrapper[5173]: E1209 14:14:10.668049 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:11.16802941 +0000 UTC m=+134.093311667 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.768830 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:14:10 crc kubenswrapper[5173]: E1209 14:14:10.769374 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:11.269342183 +0000 UTC m=+134.194624430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.820540 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.870156 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:14:10 crc kubenswrapper[5173]: E1209 14:14:10.870534 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:11.370499762 +0000 UTC m=+134.295782009 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.886865 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.946997 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.971527 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f13bdbf-8f3c-425b-a709-d27afd43ba8b-kube-api-access\") pod \"1f13bdbf-8f3c-425b-a709-d27afd43ba8b\" (UID: \"1f13bdbf-8f3c-425b-a709-d27afd43ba8b\") "
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.971691 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f13bdbf-8f3c-425b-a709-d27afd43ba8b-kubelet-dir\") pod \"1f13bdbf-8f3c-425b-a709-d27afd43ba8b\" (UID: \"1f13bdbf-8f3c-425b-a709-d27afd43ba8b\") "
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.971753 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f13bdbf-8f3c-425b-a709-d27afd43ba8b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1f13bdbf-8f3c-425b-a709-d27afd43ba8b" (UID: "1f13bdbf-8f3c-425b-a709-d27afd43ba8b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.971998 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.972104 5173 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f13bdbf-8f3c-425b-a709-d27afd43ba8b-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 09 14:14:10 crc kubenswrapper[5173]: E1209 14:14:10.972381 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:11.472366202 +0000 UTC m=+134.397648449 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:10 crc kubenswrapper[5173]: I1209 14:14:10.979458 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f13bdbf-8f3c-425b-a709-d27afd43ba8b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1f13bdbf-8f3c-425b-a709-d27afd43ba8b" (UID: "1f13bdbf-8f3c-425b-a709-d27afd43ba8b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.015957 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-kfj8k"
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.016183 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" podUID="f7b4a60a-1ec3-4e17-91ed-abb971cdaa54" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://8c699c38e9f9c021d55d6ec04cf7bd864b37be406495b6a7ffd765cc082600c1" gracePeriod=30
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.072961 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70a69462-5dd0-4aa2-88ab-6a3d43606d7e-kubelet-dir\") pod \"70a69462-5dd0-4aa2-88ab-6a3d43606d7e\" (UID: \"70a69462-5dd0-4aa2-88ab-6a3d43606d7e\") "
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.073089 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70a69462-5dd0-4aa2-88ab-6a3d43606d7e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "70a69462-5dd0-4aa2-88ab-6a3d43606d7e" (UID: "70a69462-5dd0-4aa2-88ab-6a3d43606d7e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.073147 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:14:11 crc kubenswrapper[5173]: E1209 14:14:11.073279 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:11.573259992 +0000 UTC m=+134.498542239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.073424 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70a69462-5dd0-4aa2-88ab-6a3d43606d7e-kube-api-access\") pod \"70a69462-5dd0-4aa2-88ab-6a3d43606d7e\" (UID: \"70a69462-5dd0-4aa2-88ab-6a3d43606d7e\") "
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.073799 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.074019 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f13bdbf-8f3c-425b-a709-d27afd43ba8b-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.074046 5173 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70a69462-5dd0-4aa2-88ab-6a3d43606d7e-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 09 14:14:11 crc kubenswrapper[5173]: E1209 14:14:11.074649 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:11.574632265 +0000 UTC m=+134.499914512 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.079256 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70a69462-5dd0-4aa2-88ab-6a3d43606d7e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "70a69462-5dd0-4aa2-88ab-6a3d43606d7e" (UID: "70a69462-5dd0-4aa2-88ab-6a3d43606d7e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.175043 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:14:11 crc kubenswrapper[5173]: E1209 14:14:11.175225 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:11.675198175 +0000 UTC m=+134.600480472 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.175732 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.175856 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70a69462-5dd0-4aa2-88ab-6a3d43606d7e-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 09 14:14:11 crc kubenswrapper[5173]: E1209 14:14:11.176113 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:11.676099804 +0000 UTC m=+134.601382051 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.278008 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:14:11 crc kubenswrapper[5173]: E1209 14:14:11.278315 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:11.778260043 +0000 UTC m=+134.703542280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.310624 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-72sct"]
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.311400 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70a69462-5dd0-4aa2-88ab-6a3d43606d7e" containerName="pruner"
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.311424 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a69462-5dd0-4aa2-88ab-6a3d43606d7e" containerName="pruner"
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.311454 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f13bdbf-8f3c-425b-a709-d27afd43ba8b" containerName="pruner"
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.311460 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f13bdbf-8f3c-425b-a709-d27afd43ba8b" containerName="pruner"
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.311571 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="70a69462-5dd0-4aa2-88ab-6a3d43606d7e" containerName="pruner"
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.311583 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f13bdbf-8f3c-425b-a709-d27afd43ba8b" containerName="pruner"
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.379659 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:14:11 crc kubenswrapper[5173]: E1209 14:14:11.380014 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:11.88000205 +0000 UTC m=+134.805284297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.425470 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 14:14:11 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld
Dec 09 14:14:11 crc kubenswrapper[5173]: [+]process-running ok
Dec 09 14:14:11 crc kubenswrapper[5173]: healthz check failed
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.425821 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.481187 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:14:11 crc kubenswrapper[5173]: E1209 14:14:11.481434 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:11.981409466 +0000 UTC m=+134.906691713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.583776 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:14:11 crc kubenswrapper[5173]: E1209 14:14:11.584643 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:12.084617288 +0000 UTC m=+135.009899535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.608996 5173 generic.go:358] "Generic (PLEG): container finished" podID="8536effa-529d-4962-ab4e-0d8e1c3c4d93" containerID="e753700835c5c0c431be571b87dc03786d70f6041f01d18a5b46bddd2fc8d2d6" exitCode=0
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.685058 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:14:11 crc kubenswrapper[5173]: E1209 14:14:11.685330 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:12.185287341 +0000 UTC m=+135.110569588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.685625 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:14:11 crc kubenswrapper[5173]: E1209 14:14:11.686217 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:12.186190859 +0000 UTC m=+135.111473106 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.787320 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:14:11 crc kubenswrapper[5173]: E1209 14:14:11.787488 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:12.287460471 +0000 UTC m=+135.212742718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.787676 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:14:11 crc kubenswrapper[5173]: E1209 14:14:11.788464 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:12.288417481 +0000 UTC m=+135.213699748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.888957 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:14:11 crc kubenswrapper[5173]: E1209 14:14:11.889170 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:12.389144246 +0000 UTC m=+135.314426493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.889419 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:14:11 crc kubenswrapper[5173]: E1209 14:14:11.890025 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:12.389977962 +0000 UTC m=+135.315260209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.991645 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 09 14:14:11 crc kubenswrapper[5173]: E1209 14:14:11.991810 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:12.49178869 +0000 UTC m=+135.417070937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 14:14:11 crc kubenswrapper[5173]: I1209 14:14:11.992047 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8"
Dec 09 14:14:11 crc kubenswrapper[5173]: E1209 14:14:11.992410 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:12.49239952 +0000 UTC m=+135.417681767 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.093258 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:12 crc kubenswrapper[5173]: E1209 14:14:12.093541 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:12.593499596 +0000 UTC m=+135.518781853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.093888 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:12 crc kubenswrapper[5173]: E1209 14:14:12.094267 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:12.59425093 +0000 UTC m=+135.519533197 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.194843 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:12 crc kubenswrapper[5173]: E1209 14:14:12.195020 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:12.694988665 +0000 UTC m=+135.620270932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.195277 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:12 crc kubenswrapper[5173]: E1209 14:14:12.195681 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:12.695663515 +0000 UTC m=+135.620945762 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.296982 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:12 crc kubenswrapper[5173]: E1209 14:14:12.297273 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:12.797236247 +0000 UTC m=+135.722518514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.297478 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:12 crc kubenswrapper[5173]: E1209 14:14:12.297941 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:12.797919639 +0000 UTC m=+135.723201906 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.398746 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:12 crc kubenswrapper[5173]: E1209 14:14:12.399068 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:12.899025945 +0000 UTC m=+135.824308232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.399457 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:12 crc kubenswrapper[5173]: E1209 14:14:12.399830 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:12.89981296 +0000 UTC m=+135.825095207 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.423487 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:14:12 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld Dec 09 14:14:12 crc kubenswrapper[5173]: [+]process-running ok Dec 09 14:14:12 crc kubenswrapper[5173]: healthz check failed Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.423630 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.496942 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-72sct"] Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.497571 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.497835 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.497847 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-72sct" Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.501619 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.502273 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:12 crc kubenswrapper[5173]: E1209 14:14:12.502528 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:13.002485495 +0000 UTC m=+135.927767792 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.508076 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:12 crc kubenswrapper[5173]: E1209 14:14:12.509278 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:13.009259546 +0000 UTC m=+135.934541793 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.539565 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vhv4r"] Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.610903 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:12 crc kubenswrapper[5173]: E1209 14:14:12.611075 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:13.111048424 +0000 UTC m=+136.036330671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.611383 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae976069-cbe3-4195-8666-ec1e96e284e9-catalog-content\") pod \"redhat-marketplace-72sct\" (UID: \"ae976069-cbe3-4195-8666-ec1e96e284e9\") " pod="openshift-marketplace/redhat-marketplace-72sct" Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.611463 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.611496 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae976069-cbe3-4195-8666-ec1e96e284e9-utilities\") pod \"redhat-marketplace-72sct\" (UID: \"ae976069-cbe3-4195-8666-ec1e96e284e9\") " pod="openshift-marketplace/redhat-marketplace-72sct" Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.611637 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2jlf\" (UniqueName: \"kubernetes.io/projected/ae976069-cbe3-4195-8666-ec1e96e284e9-kube-api-access-h2jlf\") pod \"redhat-marketplace-72sct\" (UID: \"ae976069-cbe3-4195-8666-ec1e96e284e9\") " pod="openshift-marketplace/redhat-marketplace-72sct" Dec 09 14:14:12 crc kubenswrapper[5173]: E1209 14:14:12.611980 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:13.111961973 +0000 UTC m=+136.037244220 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.713152 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.713540 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h2jlf\" (UniqueName: \"kubernetes.io/projected/ae976069-cbe3-4195-8666-ec1e96e284e9-kube-api-access-h2jlf\") pod \"redhat-marketplace-72sct\" (UID: \"ae976069-cbe3-4195-8666-ec1e96e284e9\") " pod="openshift-marketplace/redhat-marketplace-72sct" Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.713604 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae976069-cbe3-4195-8666-ec1e96e284e9-catalog-content\") pod \"redhat-marketplace-72sct\" (UID: \"ae976069-cbe3-4195-8666-ec1e96e284e9\") " pod="openshift-marketplace/redhat-marketplace-72sct" Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.713677 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae976069-cbe3-4195-8666-ec1e96e284e9-utilities\") pod \"redhat-marketplace-72sct\" (UID: \"ae976069-cbe3-4195-8666-ec1e96e284e9\") " pod="openshift-marketplace/redhat-marketplace-72sct" Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.714117 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae976069-cbe3-4195-8666-ec1e96e284e9-utilities\") pod \"redhat-marketplace-72sct\" (UID: \"ae976069-cbe3-4195-8666-ec1e96e284e9\") " pod="openshift-marketplace/redhat-marketplace-72sct" Dec 09 14:14:12 crc kubenswrapper[5173]: E1209 14:14:12.714187 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:13.214171554 +0000 UTC m=+136.139453801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.715798 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae976069-cbe3-4195-8666-ec1e96e284e9-catalog-content\") pod \"redhat-marketplace-72sct\" (UID: \"ae976069-cbe3-4195-8666-ec1e96e284e9\") " pod="openshift-marketplace/redhat-marketplace-72sct" Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.738444 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2jlf\" (UniqueName: \"kubernetes.io/projected/ae976069-cbe3-4195-8666-ec1e96e284e9-kube-api-access-h2jlf\") pod \"redhat-marketplace-72sct\" (UID: \"ae976069-cbe3-4195-8666-ec1e96e284e9\") " pod="openshift-marketplace/redhat-marketplace-72sct" Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.814765 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:12 crc kubenswrapper[5173]: E1209 14:14:12.815045 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:13.315033843 +0000 UTC m=+136.240316090 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.828604 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-72sct" Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.916301 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:12 crc kubenswrapper[5173]: E1209 14:14:12.916512 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:13.41647693 +0000 UTC m=+136.341759187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:12 crc kubenswrapper[5173]: I1209 14:14:12.917386 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:12 crc kubenswrapper[5173]: E1209 14:14:12.917682 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:13.417668858 +0000 UTC m=+136.342951105 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.009682 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhv4r"] Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.009925 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xmw7h"] Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.010175 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vhv4r" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.017982 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:13 crc kubenswrapper[5173]: E1209 14:14:13.018194 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:13.518176786 +0000 UTC m=+136.443459033 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.120930 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c76269-0d49-4517-be74-f6fe064135dd-utilities\") pod \"redhat-marketplace-vhv4r\" (UID: \"e9c76269-0d49-4517-be74-f6fe064135dd\") " pod="openshift-marketplace/redhat-marketplace-vhv4r" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.121147 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2ts2\" (UniqueName: \"kubernetes.io/projected/e9c76269-0d49-4517-be74-f6fe064135dd-kube-api-access-h2ts2\") pod \"redhat-marketplace-vhv4r\" (UID: \"e9c76269-0d49-4517-be74-f6fe064135dd\") " pod="openshift-marketplace/redhat-marketplace-vhv4r" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.121297 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.121435 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c76269-0d49-4517-be74-f6fe064135dd-catalog-content\") pod \"redhat-marketplace-vhv4r\" (UID: \"e9c76269-0d49-4517-be74-f6fe064135dd\") " pod="openshift-marketplace/redhat-marketplace-vhv4r" Dec 09 14:14:13 crc kubenswrapper[5173]: E1209 14:14:13.122070 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:13.622054759 +0000 UTC m=+136.547336996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.222379 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:13 crc kubenswrapper[5173]: E1209 14:14:13.222659 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:13.722620549 +0000 UTC m=+136.647902806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.223026 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c76269-0d49-4517-be74-f6fe064135dd-utilities\") pod \"redhat-marketplace-vhv4r\" (UID: \"e9c76269-0d49-4517-be74-f6fe064135dd\") " pod="openshift-marketplace/redhat-marketplace-vhv4r" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.223105 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h2ts2\" (UniqueName: \"kubernetes.io/projected/e9c76269-0d49-4517-be74-f6fe064135dd-kube-api-access-h2ts2\") pod \"redhat-marketplace-vhv4r\" (UID: \"e9c76269-0d49-4517-be74-f6fe064135dd\") " pod="openshift-marketplace/redhat-marketplace-vhv4r" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.223167 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.223226 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c76269-0d49-4517-be74-f6fe064135dd-catalog-content\") pod \"redhat-marketplace-vhv4r\" (UID: \"e9c76269-0d49-4517-be74-f6fe064135dd\") " pod="openshift-marketplace/redhat-marketplace-vhv4r" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.223521 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e9c76269-0d49-4517-be74-f6fe064135dd-utilities\") pod \"redhat-marketplace-vhv4r\" (UID: \"e9c76269-0d49-4517-be74-f6fe064135dd\") " pod="openshift-marketplace/redhat-marketplace-vhv4r" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.223967 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c76269-0d49-4517-be74-f6fe064135dd-catalog-content\") pod \"redhat-marketplace-vhv4r\" (UID: \"e9c76269-0d49-4517-be74-f6fe064135dd\") " pod="openshift-marketplace/redhat-marketplace-vhv4r" Dec 09 14:14:13 crc kubenswrapper[5173]: E1209 14:14:13.224428 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:13.724395884 +0000 UTC m=+136.649678311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.246595 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2ts2\" (UniqueName: \"kubernetes.io/projected/e9c76269-0d49-4517-be74-f6fe064135dd-kube-api-access-h2ts2\") pod \"redhat-marketplace-vhv4r\" (UID: \"e9c76269-0d49-4517-be74-f6fe064135dd\") " pod="openshift-marketplace/redhat-marketplace-vhv4r" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.324268 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:13 crc kubenswrapper[5173]: E1209 14:14:13.324506 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:13.824467598 +0000 UTC m=+136.749749875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.324956 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:13 crc kubenswrapper[5173]: E1209 14:14:13.325337 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:13.825322295 +0000 UTC m=+136.750604582 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.344693 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vhv4r" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.425532 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:13 crc kubenswrapper[5173]: E1209 14:14:13.425728 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:13.925701019 +0000 UTC m=+136.850983266 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.426251 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.426278 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:14:13 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld Dec 09 14:14:13 crc kubenswrapper[5173]: [+]process-running ok Dec 09 14:14:13 crc kubenswrapper[5173]: healthz check failed Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.426330 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:14:13 crc kubenswrapper[5173]: E1209 14:14:13.426629 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:13.926613048 +0000 UTC m=+136.851895295 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.452541 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-rxvxv" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.452603 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"70a69462-5dd0-4aa2-88ab-6a3d43606d7e","Type":"ContainerDied","Data":"59eb62e82e87c55587d5eaa8a674101f928385bb04275bb65e07184341b8973d"} Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.452643 5173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59eb62e82e87c55587d5eaa8a674101f928385bb04275bb65e07184341b8973d" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.452665 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95c8n" event={"ID":"8536effa-529d-4962-ab4e-0d8e1c3c4d93","Type":"ContainerDied","Data":"e753700835c5c0c431be571b87dc03786d70f6041f01d18a5b46bddd2fc8d2d6"} Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.452684 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"1f13bdbf-8f3c-425b-a709-d27afd43ba8b","Type":"ContainerDied","Data":"c48831c9285aeacb730c4053f56acf89685b5b90c8121eadda9243ac67c79a16"} Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.452697 5173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c48831c9285aeacb730c4053f56acf89685b5b90c8121eadda9243ac67c79a16" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.452802 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.452814 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b624h" event={"ID":"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67","Type":"ContainerStarted","Data":"3ab7e572641d83e5de2db6e28885fab6614346f3ca25f592485870c61d76e1ea"} Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.452837 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xmw7h"] Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.452859 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjpqk" event={"ID":"d4b50aa3-6227-4e8a-8dbd-e56b695472c1","Type":"ContainerStarted","Data":"a99b2ffc961cb8e257be6ee55c2c62d5b4f422e6c5c79fc8bd4f001988be50f0"} Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.452873 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b7sjh"] Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.453049 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xmw7h" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.458199 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.466465 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b7sjh"] Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.466518 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-72sct"] Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.466705 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7sjh" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.526984 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.527316 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkj79\" (UniqueName: \"kubernetes.io/projected/07be13ae-949a-42e1-9366-afe32b5480f2-kube-api-access-vkj79\") pod \"redhat-operators-xmw7h\" (UID: \"07be13ae-949a-42e1-9366-afe32b5480f2\") " pod="openshift-marketplace/redhat-operators-xmw7h" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.527497 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07be13ae-949a-42e1-9366-afe32b5480f2-utilities\") pod \"redhat-operators-xmw7h\" (UID: \"07be13ae-949a-42e1-9366-afe32b5480f2\") " pod="openshift-marketplace/redhat-operators-xmw7h" Dec 09 14:14:13 crc kubenswrapper[5173]: E1209 14:14:13.527721 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:14.027694994 +0000 UTC m=+136.952977241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.527803 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07be13ae-949a-42e1-9366-afe32b5480f2-catalog-content\") pod \"redhat-operators-xmw7h\" (UID: \"07be13ae-949a-42e1-9366-afe32b5480f2\") " pod="openshift-marketplace/redhat-operators-xmw7h" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.631732 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-72sct" event={"ID":"ae976069-cbe3-4195-8666-ec1e96e284e9","Type":"ContainerStarted","Data":"9bd79166387f38e1a18f1caeb0a42af2660a76a4aa4b0d358364631c9fa57b64"} Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.632917 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558ba319-3c10-46e3-a9e8-64e5b28db3ea-utilities\") pod \"redhat-operators-b7sjh\" (UID: \"558ba319-3c10-46e3-a9e8-64e5b28db3ea\") " pod="openshift-marketplace/redhat-operators-b7sjh" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.632980 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558ba319-3c10-46e3-a9e8-64e5b28db3ea-catalog-content\") pod \"redhat-operators-b7sjh\" (UID: \"558ba319-3c10-46e3-a9e8-64e5b28db3ea\") " pod="openshift-marketplace/redhat-operators-b7sjh" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.633017 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.633043 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2trx\" (UniqueName: \"kubernetes.io/projected/558ba319-3c10-46e3-a9e8-64e5b28db3ea-kube-api-access-k2trx\") pod \"redhat-operators-b7sjh\" (UID: \"558ba319-3c10-46e3-a9e8-64e5b28db3ea\") " pod="openshift-marketplace/redhat-operators-b7sjh" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.633070 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vkj79\" (UniqueName: \"kubernetes.io/projected/07be13ae-949a-42e1-9366-afe32b5480f2-kube-api-access-vkj79\") pod \"redhat-operators-xmw7h\" (UID: \"07be13ae-949a-42e1-9366-afe32b5480f2\") " pod="openshift-marketplace/redhat-operators-xmw7h" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.633128 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07be13ae-949a-42e1-9366-afe32b5480f2-utilities\") pod \"redhat-operators-xmw7h\" (UID: 
\"07be13ae-949a-42e1-9366-afe32b5480f2\") " pod="openshift-marketplace/redhat-operators-xmw7h" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.633179 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07be13ae-949a-42e1-9366-afe32b5480f2-catalog-content\") pod \"redhat-operators-xmw7h\" (UID: \"07be13ae-949a-42e1-9366-afe32b5480f2\") " pod="openshift-marketplace/redhat-operators-xmw7h" Dec 09 14:14:13 crc kubenswrapper[5173]: E1209 14:14:13.633681 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:14.133660112 +0000 UTC m=+137.058942369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.633797 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07be13ae-949a-42e1-9366-afe32b5480f2-catalog-content\") pod \"redhat-operators-xmw7h\" (UID: \"07be13ae-949a-42e1-9366-afe32b5480f2\") " pod="openshift-marketplace/redhat-operators-xmw7h" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.635428 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07be13ae-949a-42e1-9366-afe32b5480f2-utilities\") pod \"redhat-operators-xmw7h\" (UID: \"07be13ae-949a-42e1-9366-afe32b5480f2\") " pod="openshift-marketplace/redhat-operators-xmw7h" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.641488 5173 generic.go:358] "Generic (PLEG): container finished" podID="b00790a6-0331-44bd-9ddb-10d0598d5d74" containerID="b377f74e7027b15a87addc03db0cd4e638cade0cb6bf9e874faec6b23a7d45c0" exitCode=0 Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.641590 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp" event={"ID":"b00790a6-0331-44bd-9ddb-10d0598d5d74","Type":"ContainerDied","Data":"b377f74e7027b15a87addc03db0cd4e638cade0cb6bf9e874faec6b23a7d45c0"} Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.660842 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-twcnj" event={"ID":"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d","Type":"ContainerStarted","Data":"da06299febc1d49890a7243b68ec0d57e455ffc28b78fb837a6e46c607462fdb"} Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.665987 5173 generic.go:358] "Generic (PLEG): container finished" podID="4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67" containerID="3ab7e572641d83e5de2db6e28885fab6614346f3ca25f592485870c61d76e1ea" exitCode=0 Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.666077 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b624h" event={"ID":"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67","Type":"ContainerDied","Data":"3ab7e572641d83e5de2db6e28885fab6614346f3ca25f592485870c61d76e1ea"} 
Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.682717 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkj79\" (UniqueName: \"kubernetes.io/projected/07be13ae-949a-42e1-9366-afe32b5480f2-kube-api-access-vkj79\") pod \"redhat-operators-xmw7h\" (UID: \"07be13ae-949a-42e1-9366-afe32b5480f2\") " pod="openshift-marketplace/redhat-operators-xmw7h" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.707572 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhv4r"] Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.711550 5173 generic.go:358] "Generic (PLEG): container finished" podID="d4b50aa3-6227-4e8a-8dbd-e56b695472c1" containerID="a99b2ffc961cb8e257be6ee55c2c62d5b4f422e6c5c79fc8bd4f001988be50f0" exitCode=0 Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.711836 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjpqk" event={"ID":"d4b50aa3-6227-4e8a-8dbd-e56b695472c1","Type":"ContainerDied","Data":"a99b2ffc961cb8e257be6ee55c2c62d5b4f422e6c5c79fc8bd4f001988be50f0"} Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.734130 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.734298 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558ba319-3c10-46e3-a9e8-64e5b28db3ea-utilities\") pod \"redhat-operators-b7sjh\" (UID: \"558ba319-3c10-46e3-a9e8-64e5b28db3ea\") " pod="openshift-marketplace/redhat-operators-b7sjh" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.734365 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558ba319-3c10-46e3-a9e8-64e5b28db3ea-catalog-content\") pod \"redhat-operators-b7sjh\" (UID: \"558ba319-3c10-46e3-a9e8-64e5b28db3ea\") " pod="openshift-marketplace/redhat-operators-b7sjh" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.734402 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k2trx\" (UniqueName: \"kubernetes.io/projected/558ba319-3c10-46e3-a9e8-64e5b28db3ea-kube-api-access-k2trx\") pod \"redhat-operators-b7sjh\" (UID: \"558ba319-3c10-46e3-a9e8-64e5b28db3ea\") " pod="openshift-marketplace/redhat-operators-b7sjh" Dec 09 14:14:13 crc kubenswrapper[5173]: E1209 14:14:13.734856 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:14.234834861 +0000 UTC m=+137.160117118 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.735277 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558ba319-3c10-46e3-a9e8-64e5b28db3ea-utilities\") pod \"redhat-operators-b7sjh\" (UID: \"558ba319-3c10-46e3-a9e8-64e5b28db3ea\") " pod="openshift-marketplace/redhat-operators-b7sjh" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.735563 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558ba319-3c10-46e3-a9e8-64e5b28db3ea-catalog-content\") pod \"redhat-operators-b7sjh\" (UID: \"558ba319-3c10-46e3-a9e8-64e5b28db3ea\") " pod="openshift-marketplace/redhat-operators-b7sjh" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.761712 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2trx\" (UniqueName: \"kubernetes.io/projected/558ba319-3c10-46e3-a9e8-64e5b28db3ea-kube-api-access-k2trx\") pod \"redhat-operators-b7sjh\" (UID: \"558ba319-3c10-46e3-a9e8-64e5b28db3ea\") " pod="openshift-marketplace/redhat-operators-b7sjh" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.782178 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xmw7h" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.796893 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7sjh" Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.836950 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:13 crc kubenswrapper[5173]: E1209 14:14:13.837474 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:14.337456975 +0000 UTC m=+137.262739222 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.943440 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:13 crc kubenswrapper[5173]: E1209 14:14:13.943686 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:14.44364723 +0000 UTC m=+137.368929487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:13 crc kubenswrapper[5173]: I1209 14:14:13.944378 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:13 crc kubenswrapper[5173]: E1209 14:14:13.944830 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:14.444817736 +0000 UTC m=+137.370100153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.046421 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:14 crc kubenswrapper[5173]: E1209 14:14:14.046534 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:14.54649469 +0000 UTC m=+137.471776937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.047724 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:14 crc kubenswrapper[5173]: E1209 14:14:14.048086 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:14.54806452 +0000 UTC m=+137.473346817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.133226 5173 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.143287 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b7sjh"] Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.148811 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:14 crc kubenswrapper[5173]: E1209 14:14:14.149422 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-09 14:14:14.649405754 +0000 UTC m=+137.574687991 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.253755 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:14 crc kubenswrapper[5173]: E1209 14:14:14.254116 5173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-09 14:14:14.754100352 +0000 UTC m=+137.679382599 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-tpkl8" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.271428 5173 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-09T14:14:14.133262641Z","UUID":"8fab302f-f4e1-4f23-a0f5-8deaae1408a4","Handler":null,"Name":"","Endpoint":""} Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.287754 5173 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.287805 5173 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.298005 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xmw7h"] Dec 09 14:14:14 crc kubenswrapper[5173]: W1209 14:14:14.317506 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07be13ae_949a_42e1_9366_afe32b5480f2.slice/crio-d05f78a8a403b8fecb379321c793c8aae2c2808b5e58de2aec66be001f4bc56c WatchSource:0}: Error finding container d05f78a8a403b8fecb379321c793c8aae2c2808b5e58de2aec66be001f4bc56c: Status 404 returned error can't find the container with id d05f78a8a403b8fecb379321c793c8aae2c2808b5e58de2aec66be001f4bc56c Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.354769 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.372480 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.423920 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:14:14 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld Dec 09 14:14:14 crc kubenswrapper[5173]: [+]process-running ok Dec 09 14:14:14 crc kubenswrapper[5173]: healthz check failed Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.423976 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.432950 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.440887 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-sgppc" Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.456604 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.459980 5173 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.460028 5173 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.529210 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-tpkl8\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.594986 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.601390 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.727052 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmw7h" event={"ID":"07be13ae-949a-42e1-9366-afe32b5480f2","Type":"ContainerStarted","Data":"d05f78a8a403b8fecb379321c793c8aae2c2808b5e58de2aec66be001f4bc56c"} Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.732515 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-twcnj" event={"ID":"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d","Type":"ContainerStarted","Data":"7291a7e9b6f55db5b9817b521c6620477c95402778af7f2bfa7d209be2c2f93e"} Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.736337 5173 generic.go:358] "Generic (PLEG): container finished" podID="e9c76269-0d49-4517-be74-f6fe064135dd" containerID="03cff7e7f4ed2781991d810fbe846c645a8b4898bcaa620f83633bf64bde3b16" exitCode=0 Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.736731 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhv4r" event={"ID":"e9c76269-0d49-4517-be74-f6fe064135dd","Type":"ContainerDied","Data":"03cff7e7f4ed2781991d810fbe846c645a8b4898bcaa620f83633bf64bde3b16"} Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.736774 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhv4r" event={"ID":"e9c76269-0d49-4517-be74-f6fe064135dd","Type":"ContainerStarted","Data":"e17f67924b4da19ccc413b4ef22294355d8e6d0e63048f382c3ae32c17dc519f"} Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.746560 5173 generic.go:358] "Generic (PLEG): container finished" podID="ae976069-cbe3-4195-8666-ec1e96e284e9" containerID="f31f8f75bf829d426b46a72c4b8b191b6a9ab1d10bf4edc620f1cdca3648f4e5" exitCode=0 Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.746638 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-72sct" event={"ID":"ae976069-cbe3-4195-8666-ec1e96e284e9","Type":"ContainerDied","Data":"f31f8f75bf829d426b46a72c4b8b191b6a9ab1d10bf4edc620f1cdca3648f4e5"} Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.750834 5173 generic.go:358] "Generic (PLEG): container finished" podID="558ba319-3c10-46e3-a9e8-64e5b28db3ea" containerID="c3d5fb8e679e3e2df9c3ef96ba6f0cd27a26689a9d0ae8f1c837fc3e281a4e26" exitCode=0 Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.751686 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7sjh" event={"ID":"558ba319-3c10-46e3-a9e8-64e5b28db3ea","Type":"ContainerDied","Data":"c3d5fb8e679e3e2df9c3ef96ba6f0cd27a26689a9d0ae8f1c837fc3e281a4e26"} Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.751748 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7sjh" event={"ID":"558ba319-3c10-46e3-a9e8-64e5b28db3ea","Type":"ContainerStarted","Data":"4642d50fb978f1a53c8b7c0b6e0d08cfc263b6de74e4eac21d37ad9b962f0e5f"} Dec 09 14:14:14 crc kubenswrapper[5173]: I1209 14:14:14.860821 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-tpkl8"] Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.127927 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp" Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.279455 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b00790a6-0331-44bd-9ddb-10d0598d5d74-config-volume\") pod \"b00790a6-0331-44bd-9ddb-10d0598d5d74\" (UID: \"b00790a6-0331-44bd-9ddb-10d0598d5d74\") " Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.279496 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9przc\" (UniqueName: \"kubernetes.io/projected/b00790a6-0331-44bd-9ddb-10d0598d5d74-kube-api-access-9przc\") pod \"b00790a6-0331-44bd-9ddb-10d0598d5d74\" (UID: \"b00790a6-0331-44bd-9ddb-10d0598d5d74\") " Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.279647 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b00790a6-0331-44bd-9ddb-10d0598d5d74-secret-volume\") pod \"b00790a6-0331-44bd-9ddb-10d0598d5d74\" (UID: \"b00790a6-0331-44bd-9ddb-10d0598d5d74\") " Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.280294 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b00790a6-0331-44bd-9ddb-10d0598d5d74-config-volume" (OuterVolumeSpecName: "config-volume") pod "b00790a6-0331-44bd-9ddb-10d0598d5d74" (UID: "b00790a6-0331-44bd-9ddb-10d0598d5d74"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.285938 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b00790a6-0331-44bd-9ddb-10d0598d5d74-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b00790a6-0331-44bd-9ddb-10d0598d5d74" (UID: "b00790a6-0331-44bd-9ddb-10d0598d5d74"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.287730 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b00790a6-0331-44bd-9ddb-10d0598d5d74-kube-api-access-9przc" (OuterVolumeSpecName: "kube-api-access-9przc") pod "b00790a6-0331-44bd-9ddb-10d0598d5d74" (UID: "b00790a6-0331-44bd-9ddb-10d0598d5d74"). InnerVolumeSpecName "kube-api-access-9przc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.380991 5173 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b00790a6-0331-44bd-9ddb-10d0598d5d74-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.381327 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9przc\" (UniqueName: \"kubernetes.io/projected/b00790a6-0331-44bd-9ddb-10d0598d5d74-kube-api-access-9przc\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.381340 5173 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b00790a6-0331-44bd-9ddb-10d0598d5d74-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.424228 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:14:15 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld Dec 09 14:14:15 crc kubenswrapper[5173]: [+]process-running ok Dec 09 14:14:15 crc kubenswrapper[5173]: healthz check failed Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.424310 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.768213 5173 generic.go:358] "Generic (PLEG): container finished" podID="07be13ae-949a-42e1-9366-afe32b5480f2" containerID="dbb1d27d1272e2afa0f8d1141ddfba0c03885cc494cd66a4a349fd1299db39a1" exitCode=0 Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.768343 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmw7h" event={"ID":"07be13ae-949a-42e1-9366-afe32b5480f2","Type":"ContainerDied","Data":"dbb1d27d1272e2afa0f8d1141ddfba0c03885cc494cd66a4a349fd1299db39a1"} Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.801848 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" event={"ID":"3f277bd6-ea48-4729-960f-5a2b97bbfecc","Type":"ContainerStarted","Data":"245907205bda48e06d3d5bfe7a589499facddefa96f0a5d9b52d28edb78a0e9f"} Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.801907 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" event={"ID":"3f277bd6-ea48-4729-960f-5a2b97bbfecc","Type":"ContainerStarted","Data":"0b52cd63eea125196b8d8adaf8cddd77536dce36cb814f9a2501b416545be835"} Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.801955 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.806677 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp" event={"ID":"b00790a6-0331-44bd-9ddb-10d0598d5d74","Type":"ContainerDied","Data":"aadcca17c7cdce8204267d7d76d23cf1c9d8026d14a7bdca4a676df176796a6c"} Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.806716 5173 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aadcca17c7cdce8204267d7d76d23cf1c9d8026d14a7bdca4a676df176796a6c" Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.806808 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421480-qssgp" Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.816927 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-twcnj" event={"ID":"55b770c0-e50a-4a1e-b711-5e87b1a4cc3d","Type":"ContainerStarted","Data":"e3dbd76d0e36a55e563a383903feb50aec3454e79b264f75392bf7a45f410371"} Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.826245 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" podStartSLOduration=113.826223153 podStartE2EDuration="1m53.826223153s" podCreationTimestamp="2025-12-09 14:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:15.819656698 +0000 UTC m=+138.744938965" watchObservedRunningTime="2025-12-09 14:14:15.826223153 +0000 UTC m=+138.751505400" Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.843136 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-twcnj" podStartSLOduration=31.843109558 podStartE2EDuration="31.843109558s" podCreationTimestamp="2025-12-09 14:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:15.837728061 +0000 UTC m=+138.763010328" watchObservedRunningTime="2025-12-09 14:14:15.843109558 +0000 UTC m=+138.768391815" Dec 09 14:14:15 crc kubenswrapper[5173]: I1209 14:14:15.879686 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 09 14:14:16 crc kubenswrapper[5173]: I1209 14:14:16.423559 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:14:16 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld Dec 09 14:14:16 crc kubenswrapper[5173]: [+]process-running ok Dec 09 14:14:16 crc kubenswrapper[5173]: healthz check failed Dec 09 14:14:16 crc kubenswrapper[5173]: I1209 14:14:16.423615 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:14:17 crc kubenswrapper[5173]: I1209 14:14:17.344619 5173 patch_prober.go:28] interesting pod/console-64d44f6ddf-q5kgl container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 09 14:14:17 crc kubenswrapper[5173]: I1209 14:14:17.344679 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-q5kgl" podUID="a8f67fe4-59ba-4391-aa5d-ba4a8e1fe68b" containerName="console" probeResult="failure" output="Get 
\"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 09 14:14:17 crc kubenswrapper[5173]: I1209 14:14:17.424413 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:14:17 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld Dec 09 14:14:17 crc kubenswrapper[5173]: [+]process-running ok Dec 09 14:14:17 crc kubenswrapper[5173]: healthz check failed Dec 09 14:14:17 crc kubenswrapper[5173]: I1209 14:14:17.424567 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:14:17 crc kubenswrapper[5173]: I1209 14:14:17.813534 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-54cg5"] Dec 09 14:14:17 crc kubenswrapper[5173]: I1209 14:14:17.813862 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5" podUID="4751d5a1-9958-4f4f-aa73-a94b587a09b7" containerName="controller-manager" containerID="cri-o://1dba72aa6716362ddf3006bbd7ef572748927c7f075282fff51a5c6b6e1233b5" gracePeriod=30 Dec 09 14:14:17 crc kubenswrapper[5173]: I1209 14:14:17.829662 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"] Dec 09 14:14:17 crc kubenswrapper[5173]: I1209 14:14:17.829881 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv" podUID="36b504f1-6aae-4802-ab5d-ce89caf2f742" containerName="route-controller-manager" containerID="cri-o://2befee708453c170540b24c01ae539674f19508c0fea512e58573741e8dd92ef" gracePeriod=30 Dec 09 14:14:18 crc kubenswrapper[5173]: I1209 14:14:18.424466 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 14:14:18 crc kubenswrapper[5173]: [-]has-synced failed: reason withheld Dec 09 14:14:18 crc kubenswrapper[5173]: [+]process-running ok Dec 09 14:14:18 crc kubenswrapper[5173]: healthz check failed Dec 09 14:14:18 crc kubenswrapper[5173]: I1209 14:14:18.424751 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 14:14:18 crc kubenswrapper[5173]: I1209 14:14:18.500738 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-kfj8k" Dec 09 14:14:18 crc kubenswrapper[5173]: E1209 14:14:18.559754 5173 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c699c38e9f9c021d55d6ec04cf7bd864b37be406495b6a7ffd765cc082600c1" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:14:18 crc kubenswrapper[5173]: E1209 
14:14:18.562565 5173 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c699c38e9f9c021d55d6ec04cf7bd864b37be406495b6a7ffd765cc082600c1" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:14:18 crc kubenswrapper[5173]: E1209 14:14:18.565402 5173 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c699c38e9f9c021d55d6ec04cf7bd864b37be406495b6a7ffd765cc082600c1" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:14:18 crc kubenswrapper[5173]: E1209 14:14:18.565498 5173 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" podUID="f7b4a60a-1ec3-4e17-91ed-abb971cdaa54" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 09 14:14:18 crc kubenswrapper[5173]: I1209 14:14:18.838279 5173 generic.go:358] "Generic (PLEG): container finished" podID="4751d5a1-9958-4f4f-aa73-a94b587a09b7" containerID="1dba72aa6716362ddf3006bbd7ef572748927c7f075282fff51a5c6b6e1233b5" exitCode=0 Dec 09 14:14:18 crc kubenswrapper[5173]: I1209 14:14:18.838424 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5" event={"ID":"4751d5a1-9958-4f4f-aa73-a94b587a09b7","Type":"ContainerDied","Data":"1dba72aa6716362ddf3006bbd7ef572748927c7f075282fff51a5c6b6e1233b5"} Dec 09 14:14:18 crc kubenswrapper[5173]: I1209 14:14:18.841752 5173 generic.go:358] "Generic (PLEG): container finished" podID="36b504f1-6aae-4802-ab5d-ce89caf2f742" containerID="2befee708453c170540b24c01ae539674f19508c0fea512e58573741e8dd92ef" exitCode=0 Dec 09 14:14:18 crc kubenswrapper[5173]: I1209 14:14:18.842011 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv" event={"ID":"36b504f1-6aae-4802-ab5d-ce89caf2f742","Type":"ContainerDied","Data":"2befee708453c170540b24c01ae539674f19508c0fea512e58573741e8dd92ef"} Dec 09 14:14:19 crc kubenswrapper[5173]: I1209 14:14:19.316786 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-zhlr7" Dec 09 14:14:19 crc kubenswrapper[5173]: I1209 14:14:19.424230 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" Dec 09 14:14:19 crc kubenswrapper[5173]: I1209 14:14:19.429594 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" Dec 09 14:14:19 crc kubenswrapper[5173]: I1209 14:14:19.462259 5173 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-ppjzv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 09 14:14:19 crc kubenswrapper[5173]: I1209 14:14:19.462386 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv" 
podUID="36b504f1-6aae-4802-ab5d-ce89caf2f742" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 09 14:14:19 crc kubenswrapper[5173]: I1209 14:14:19.589908 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:14:20 crc kubenswrapper[5173]: I1209 14:14:20.548800 5173 ???:1] "http: TLS handshake error from 192.168.126.11:36226: no serving certificate available for the kubelet" Dec 09 14:14:27 crc kubenswrapper[5173]: I1209 14:14:27.596460 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-q5kgl" Dec 09 14:14:27 crc kubenswrapper[5173]: I1209 14:14:27.605711 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-q5kgl" Dec 09 14:14:27 crc kubenswrapper[5173]: I1209 14:14:27.865304 5173 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-54cg5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 14:14:27 crc kubenswrapper[5173]: I1209 14:14:27.865455 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5" podUID="4751d5a1-9958-4f4f-aa73-a94b587a09b7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 14:14:28 crc kubenswrapper[5173]: E1209 14:14:28.559015 5173 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c699c38e9f9c021d55d6ec04cf7bd864b37be406495b6a7ffd765cc082600c1" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:14:28 crc kubenswrapper[5173]: E1209 14:14:28.561186 5173 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c699c38e9f9c021d55d6ec04cf7bd864b37be406495b6a7ffd765cc082600c1" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:14:28 crc kubenswrapper[5173]: E1209 14:14:28.563103 5173 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c699c38e9f9c021d55d6ec04cf7bd864b37be406495b6a7ffd765cc082600c1" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:14:28 crc kubenswrapper[5173]: E1209 14:14:28.563145 5173 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" podUID="f7b4a60a-1ec3-4e17-91ed-abb971cdaa54" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 09 14:14:30 crc kubenswrapper[5173]: I1209 14:14:30.462305 5173 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-ppjzv 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 14:14:30 crc kubenswrapper[5173]: I1209 14:14:30.463576 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv" podUID="36b504f1-6aae-4802-ab5d-ce89caf2f742" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 14:14:30 crc kubenswrapper[5173]: I1209 14:14:30.466561 5173 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tnx4d container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 14:14:30 crc kubenswrapper[5173]: I1209 14:14:30.466626 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-68cf44c8b8-tnx4d" podUID="139a1ff9-4912-4a2c-b0d2-c220452ab9f2" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 14:14:31 crc kubenswrapper[5173]: I1209 14:14:31.191706 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 09 14:14:36 crc kubenswrapper[5173]: I1209 14:14:36.826633 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:14:37 crc kubenswrapper[5173]: I1209 14:14:37.864242 5173 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-54cg5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 14:14:37 crc kubenswrapper[5173]: I1209 14:14:37.864410 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5" podUID="4751d5a1-9958-4f4f-aa73-a94b587a09b7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 14:14:38 crc kubenswrapper[5173]: E1209 14:14:38.559259 5173 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c699c38e9f9c021d55d6ec04cf7bd864b37be406495b6a7ffd765cc082600c1" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:14:38 crc kubenswrapper[5173]: E1209 14:14:38.560986 5173 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c699c38e9f9c021d55d6ec04cf7bd864b37be406495b6a7ffd765cc082600c1" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:14:38 crc 
kubenswrapper[5173]: E1209 14:14:38.562737 5173 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c699c38e9f9c021d55d6ec04cf7bd864b37be406495b6a7ffd765cc082600c1" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 09 14:14:38 crc kubenswrapper[5173]: E1209 14:14:38.562881 5173 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" podUID="f7b4a60a-1ec3-4e17-91ed-abb971cdaa54" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 09 14:14:39 crc kubenswrapper[5173]: I1209 14:14:39.303922 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 09 14:14:39 crc kubenswrapper[5173]: I1209 14:14:39.305159 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b00790a6-0331-44bd-9ddb-10d0598d5d74" containerName="collect-profiles" Dec 09 14:14:39 crc kubenswrapper[5173]: I1209 14:14:39.305180 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00790a6-0331-44bd-9ddb-10d0598d5d74" containerName="collect-profiles" Dec 09 14:14:39 crc kubenswrapper[5173]: I1209 14:14:39.305287 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="b00790a6-0331-44bd-9ddb-10d0598d5d74" containerName="collect-profiles" Dec 09 14:14:39 crc kubenswrapper[5173]: I1209 14:14:39.694150 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 09 14:14:39 crc kubenswrapper[5173]: I1209 14:14:39.694545 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-s7fzg" Dec 09 14:14:39 crc kubenswrapper[5173]: I1209 14:14:39.694692 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:14:39 crc kubenswrapper[5173]: I1209 14:14:39.705868 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 09 14:14:39 crc kubenswrapper[5173]: I1209 14:14:39.706577 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 09 14:14:39 crc kubenswrapper[5173]: I1209 14:14:39.709171 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2340ebd9-eba3-4b52-a6ce-a3e6fba54556-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"2340ebd9-eba3-4b52-a6ce-a3e6fba54556\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:14:39 crc kubenswrapper[5173]: I1209 14:14:39.709221 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2340ebd9-eba3-4b52-a6ce-a3e6fba54556-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"2340ebd9-eba3-4b52-a6ce-a3e6fba54556\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:14:39 crc kubenswrapper[5173]: I1209 14:14:39.810861 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2340ebd9-eba3-4b52-a6ce-a3e6fba54556-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"2340ebd9-eba3-4b52-a6ce-a3e6fba54556\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:14:39 crc kubenswrapper[5173]: I1209 14:14:39.811270 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2340ebd9-eba3-4b52-a6ce-a3e6fba54556-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"2340ebd9-eba3-4b52-a6ce-a3e6fba54556\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:14:39 crc kubenswrapper[5173]: I1209 14:14:39.810960 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2340ebd9-eba3-4b52-a6ce-a3e6fba54556-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"2340ebd9-eba3-4b52-a6ce-a3e6fba54556\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:14:39 crc kubenswrapper[5173]: I1209 14:14:39.831085 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2340ebd9-eba3-4b52-a6ce-a3e6fba54556-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"2340ebd9-eba3-4b52-a6ce-a3e6fba54556\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:14:40 crc kubenswrapper[5173]: I1209 14:14:40.029006 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:14:40 crc kubenswrapper[5173]: I1209 14:14:40.462439 5173 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-ppjzv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 14:14:40 crc kubenswrapper[5173]: I1209 14:14:40.463499 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv" podUID="36b504f1-6aae-4802-ab5d-ce89caf2f742" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 14:14:41 crc kubenswrapper[5173]: I1209 14:14:41.055877 5173 ???:1] "http: TLS handshake error from 192.168.126.11:45718: no serving certificate available for the kubelet" Dec 09 14:14:43 crc kubenswrapper[5173]: I1209 14:14:43.973972 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-7j5wv_f7b4a60a-1ec3-4e17-91ed-abb971cdaa54/kube-multus-additional-cni-plugins/0.log" Dec 09 14:14:43 crc kubenswrapper[5173]: I1209 14:14:43.974438 5173 generic.go:358] "Generic (PLEG): container finished" podID="f7b4a60a-1ec3-4e17-91ed-abb971cdaa54" containerID="8c699c38e9f9c021d55d6ec04cf7bd864b37be406495b6a7ffd765cc082600c1" exitCode=137 Dec 09 14:14:43 crc kubenswrapper[5173]: I1209 14:14:43.974497 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" event={"ID":"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54","Type":"ContainerDied","Data":"8c699c38e9f9c021d55d6ec04cf7bd864b37be406495b6a7ffd765cc082600c1"} Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.653785 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.720823 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4751d5a1-9958-4f4f-aa73-a94b587a09b7-client-ca\") pod \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.720892 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4751d5a1-9958-4f4f-aa73-a94b587a09b7-config\") pod \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.720983 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4751d5a1-9958-4f4f-aa73-a94b587a09b7-proxy-ca-bundles\") pod \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.721008 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4751d5a1-9958-4f4f-aa73-a94b587a09b7-tmp\") pod \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.721075 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4751d5a1-9958-4f4f-aa73-a94b587a09b7-serving-cert\") pod \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.721141 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjfkn\" (UniqueName: \"kubernetes.io/projected/4751d5a1-9958-4f4f-aa73-a94b587a09b7-kube-api-access-rjfkn\") pod \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\" (UID: \"4751d5a1-9958-4f4f-aa73-a94b587a09b7\") " Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.721163 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b47dcf89f-kxn55"] Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.721789 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4751d5a1-9958-4f4f-aa73-a94b587a09b7-tmp" (OuterVolumeSpecName: "tmp") pod "4751d5a1-9958-4f4f-aa73-a94b587a09b7" (UID: "4751d5a1-9958-4f4f-aa73-a94b587a09b7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.721854 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.721989 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4751d5a1-9958-4f4f-aa73-a94b587a09b7-client-ca" (OuterVolumeSpecName: "client-ca") pod "4751d5a1-9958-4f4f-aa73-a94b587a09b7" (UID: "4751d5a1-9958-4f4f-aa73-a94b587a09b7"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.722552 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4751d5a1-9958-4f4f-aa73-a94b587a09b7" containerName="controller-manager" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.722582 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="4751d5a1-9958-4f4f-aa73-a94b587a09b7" containerName="controller-manager" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.722597 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36b504f1-6aae-4802-ab5d-ce89caf2f742" containerName="route-controller-manager" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.722625 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b504f1-6aae-4802-ab5d-ce89caf2f742" containerName="route-controller-manager" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.722734 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="4751d5a1-9958-4f4f-aa73-a94b587a09b7" containerName="controller-manager" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.722754 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="36b504f1-6aae-4802-ab5d-ce89caf2f742" containerName="route-controller-manager" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.723805 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4751d5a1-9958-4f4f-aa73-a94b587a09b7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4751d5a1-9958-4f4f-aa73-a94b587a09b7" (UID: "4751d5a1-9958-4f4f-aa73-a94b587a09b7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.724149 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4751d5a1-9958-4f4f-aa73-a94b587a09b7-config" (OuterVolumeSpecName: "config") pod "4751d5a1-9958-4f4f-aa73-a94b587a09b7" (UID: "4751d5a1-9958-4f4f-aa73-a94b587a09b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.732962 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b47dcf89f-kxn55"] Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.733107 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.740501 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4751d5a1-9958-4f4f-aa73-a94b587a09b7-kube-api-access-rjfkn" (OuterVolumeSpecName: "kube-api-access-rjfkn") pod "4751d5a1-9958-4f4f-aa73-a94b587a09b7" (UID: "4751d5a1-9958-4f4f-aa73-a94b587a09b7"). InnerVolumeSpecName "kube-api-access-rjfkn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.754602 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns"] Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.758942 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.769995 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4751d5a1-9958-4f4f-aa73-a94b587a09b7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4751d5a1-9958-4f4f-aa73-a94b587a09b7" (UID: "4751d5a1-9958-4f4f-aa73-a94b587a09b7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.772144 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-7j5wv_f7b4a60a-1ec3-4e17-91ed-abb971cdaa54/kube-multus-additional-cni-plugins/0.log" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.772216 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.772296 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns"] Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.822370 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ggvp\" (UniqueName: \"kubernetes.io/projected/36b504f1-6aae-4802-ab5d-ce89caf2f742-kube-api-access-9ggvp\") pod \"36b504f1-6aae-4802-ab5d-ce89caf2f742\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.822470 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36b504f1-6aae-4802-ab5d-ce89caf2f742-tmp\") pod \"36b504f1-6aae-4802-ab5d-ce89caf2f742\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.822533 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-client-ca\") pod \"36b504f1-6aae-4802-ab5d-ce89caf2f742\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.822570 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-config\") pod \"36b504f1-6aae-4802-ab5d-ce89caf2f742\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.822639 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b504f1-6aae-4802-ab5d-ce89caf2f742-serving-cert\") pod \"36b504f1-6aae-4802-ab5d-ce89caf2f742\" (UID: \"36b504f1-6aae-4802-ab5d-ce89caf2f742\") " Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.822848 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-tmp\") pod \"route-controller-manager-6754ff4c54-7hkns\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.822915 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2cff0b6a-d823-4356-8362-b7e829522f42-client-ca\") pod \"controller-manager-5b47dcf89f-kxn55\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.822944 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-config\") pod \"route-controller-manager-6754ff4c54-7hkns\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.823015 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-serving-cert\") pod \"route-controller-manager-6754ff4c54-7hkns\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.823042 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvz65\" (UniqueName: \"kubernetes.io/projected/2cff0b6a-d823-4356-8362-b7e829522f42-kube-api-access-qvz65\") pod \"controller-manager-5b47dcf89f-kxn55\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.823102 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cff0b6a-d823-4356-8362-b7e829522f42-proxy-ca-bundles\") pod \"controller-manager-5b47dcf89f-kxn55\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.823122 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cff0b6a-d823-4356-8362-b7e829522f42-serving-cert\") pod \"controller-manager-5b47dcf89f-kxn55\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.823185 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkkjm\" (UniqueName: \"kubernetes.io/projected/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-kube-api-access-jkkjm\") pod \"route-controller-manager-6754ff4c54-7hkns\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.823223 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-client-ca\") pod \"route-controller-manager-6754ff4c54-7hkns\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.823291 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2cff0b6a-d823-4356-8362-b7e829522f42-tmp\") pod \"controller-manager-5b47dcf89f-kxn55\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.823336 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cff0b6a-d823-4356-8362-b7e829522f42-config\") pod \"controller-manager-5b47dcf89f-kxn55\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.823418 5173 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4751d5a1-9958-4f4f-aa73-a94b587a09b7-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.824422 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4751d5a1-9958-4f4f-aa73-a94b587a09b7-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.824446 5173 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4751d5a1-9958-4f4f-aa73-a94b587a09b7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.824458 5173 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4751d5a1-9958-4f4f-aa73-a94b587a09b7-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.824490 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4751d5a1-9958-4f4f-aa73-a94b587a09b7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.824503 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rjfkn\" (UniqueName: \"kubernetes.io/projected/4751d5a1-9958-4f4f-aa73-a94b587a09b7-kube-api-access-rjfkn\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.825460 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36b504f1-6aae-4802-ab5d-ce89caf2f742-tmp" (OuterVolumeSpecName: "tmp") pod "36b504f1-6aae-4802-ab5d-ce89caf2f742" (UID: "36b504f1-6aae-4802-ab5d-ce89caf2f742"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.825962 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-client-ca" (OuterVolumeSpecName: "client-ca") pod "36b504f1-6aae-4802-ab5d-ce89caf2f742" (UID: "36b504f1-6aae-4802-ab5d-ce89caf2f742"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.826449 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-config" (OuterVolumeSpecName: "config") pod "36b504f1-6aae-4802-ab5d-ce89caf2f742" (UID: "36b504f1-6aae-4802-ab5d-ce89caf2f742"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.842121 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36b504f1-6aae-4802-ab5d-ce89caf2f742-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "36b504f1-6aae-4802-ab5d-ce89caf2f742" (UID: "36b504f1-6aae-4802-ab5d-ce89caf2f742"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.846466 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36b504f1-6aae-4802-ab5d-ce89caf2f742-kube-api-access-9ggvp" (OuterVolumeSpecName: "kube-api-access-9ggvp") pod "36b504f1-6aae-4802-ab5d-ce89caf2f742" (UID: "36b504f1-6aae-4802-ab5d-ce89caf2f742"). InnerVolumeSpecName "kube-api-access-9ggvp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.925160 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-cni-sysctl-allowlist\") pod \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\" (UID: \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\") " Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.925307 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-ready\") pod \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\" (UID: \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\") " Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.925401 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-tuning-conf-dir\") pod \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\" (UID: \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\") " Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.925472 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc4pj\" (UniqueName: \"kubernetes.io/projected/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-kube-api-access-gc4pj\") pod \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\" (UID: \"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54\") " Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.925582 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "f7b4a60a-1ec3-4e17-91ed-abb971cdaa54" (UID: "f7b4a60a-1ec3-4e17-91ed-abb971cdaa54"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.925656 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2cff0b6a-d823-4356-8362-b7e829522f42-tmp\") pod \"controller-manager-5b47dcf89f-kxn55\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.925730 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cff0b6a-d823-4356-8362-b7e829522f42-config\") pod \"controller-manager-5b47dcf89f-kxn55\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.926239 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2cff0b6a-d823-4356-8362-b7e829522f42-tmp\") pod \"controller-manager-5b47dcf89f-kxn55\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.926549 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-tmp\") pod \"route-controller-manager-6754ff4c54-7hkns\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.927214 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-ready" (OuterVolumeSpecName: "ready") pod "f7b4a60a-1ec3-4e17-91ed-abb971cdaa54" (UID: "f7b4a60a-1ec3-4e17-91ed-abb971cdaa54"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.928825 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "f7b4a60a-1ec3-4e17-91ed-abb971cdaa54" (UID: "f7b4a60a-1ec3-4e17-91ed-abb971cdaa54"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.928956 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cff0b6a-d823-4356-8362-b7e829522f42-client-ca\") pod \"controller-manager-5b47dcf89f-kxn55\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.929002 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-config\") pod \"route-controller-manager-6754ff4c54-7hkns\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.929055 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-tmp\") pod \"route-controller-manager-6754ff4c54-7hkns\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.929074 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-serving-cert\") pod \"route-controller-manager-6754ff4c54-7hkns\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.930479 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cff0b6a-d823-4356-8362-b7e829522f42-config\") pod \"controller-manager-5b47dcf89f-kxn55\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.930937 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-config\") pod \"route-controller-manager-6754ff4c54-7hkns\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.930996 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qvz65\" (UniqueName: \"kubernetes.io/projected/2cff0b6a-d823-4356-8362-b7e829522f42-kube-api-access-qvz65\") pod \"controller-manager-5b47dcf89f-kxn55\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.931587 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cff0b6a-d823-4356-8362-b7e829522f42-proxy-ca-bundles\") pod \"controller-manager-5b47dcf89f-kxn55\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.931626 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cff0b6a-d823-4356-8362-b7e829522f42-serving-cert\") pod \"controller-manager-5b47dcf89f-kxn55\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.931734 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jkkjm\" (UniqueName: \"kubernetes.io/projected/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-kube-api-access-jkkjm\") pod \"route-controller-manager-6754ff4c54-7hkns\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.931664 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cff0b6a-d823-4356-8362-b7e829522f42-client-ca\") pod \"controller-manager-5b47dcf89f-kxn55\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.931792 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-client-ca\") pod \"route-controller-manager-6754ff4c54-7hkns\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.933418 5173 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.933451 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b504f1-6aae-4802-ab5d-ce89caf2f742-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.933462 5173 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.933474 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b504f1-6aae-4802-ab5d-ce89caf2f742-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.933487 5173 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.933497 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9ggvp\" (UniqueName: \"kubernetes.io/projected/36b504f1-6aae-4802-ab5d-ce89caf2f742-kube-api-access-9ggvp\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.933508 5173 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/36b504f1-6aae-4802-ab5d-ce89caf2f742-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.933520 5173 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: 
\"kubernetes.io/empty-dir/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-ready\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.936787 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-client-ca\") pod \"route-controller-manager-6754ff4c54-7hkns\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.945905 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cff0b6a-d823-4356-8362-b7e829522f42-serving-cert\") pod \"controller-manager-5b47dcf89f-kxn55\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.945087 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-serving-cert\") pod \"route-controller-manager-6754ff4c54-7hkns\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.949089 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cff0b6a-d823-4356-8362-b7e829522f42-proxy-ca-bundles\") pod \"controller-manager-5b47dcf89f-kxn55\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.954462 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-kube-api-access-gc4pj" (OuterVolumeSpecName: "kube-api-access-gc4pj") pod "f7b4a60a-1ec3-4e17-91ed-abb971cdaa54" (UID: "f7b4a60a-1ec3-4e17-91ed-abb971cdaa54"). InnerVolumeSpecName "kube-api-access-gc4pj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.960511 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkkjm\" (UniqueName: \"kubernetes.io/projected/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-kube-api-access-jkkjm\") pod \"route-controller-manager-6754ff4c54-7hkns\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:46 crc kubenswrapper[5173]: I1209 14:14:46.966743 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvz65\" (UniqueName: \"kubernetes.io/projected/2cff0b6a-d823-4356-8362-b7e829522f42-kube-api-access-qvz65\") pod \"controller-manager-5b47dcf89f-kxn55\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.010677 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b624h" event={"ID":"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67","Type":"ContainerStarted","Data":"fc8bdad2874eb21527848709f04bd44ddbf086708b664e798b79ffae73adc100"} Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.012320 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-7j5wv_f7b4a60a-1ec3-4e17-91ed-abb971cdaa54/kube-multus-additional-cni-plugins/0.log" Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.012423 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" event={"ID":"f7b4a60a-1ec3-4e17-91ed-abb971cdaa54","Type":"ContainerDied","Data":"9f5b1b1248fe758237429cd7396228391c3b52fa2305dfff9393cf993652961a"} Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.012451 5173 scope.go:117] "RemoveContainer" containerID="8c699c38e9f9c021d55d6ec04cf7bd864b37be406495b6a7ffd765cc082600c1" Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.012565 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-7j5wv" Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.020639 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq8bj" event={"ID":"a79afc8b-ca22-4e56-b7a9-d725b23e30ff","Type":"ContainerStarted","Data":"67225469e0f1612f0641a22816052540e2d74a5fc97e3b66321bf5ed6a0fc8e1"} Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.026914 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5" event={"ID":"4751d5a1-9958-4f4f-aa73-a94b587a09b7","Type":"ContainerDied","Data":"5b50cb46dfb5e44f82920aed495d8438c2d76d2b5d1569f4f7f1f7e9bf30e46b"} Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.027074 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-54cg5" Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.030694 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv" event={"ID":"36b504f1-6aae-4802-ab5d-ce89caf2f742","Type":"ContainerDied","Data":"bc42dda00a7f508e88841817e7d826c38808dc85d57dfbf803cc34dbeed98380"} Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.030815 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv" Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.036785 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gc4pj\" (UniqueName: \"kubernetes.io/projected/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54-kube-api-access-gc4pj\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.039364 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95c8n" event={"ID":"8536effa-529d-4962-ab4e-0d8e1c3c4d93","Type":"ContainerStarted","Data":"1a32b04719295f2b2643e8e0f85842fd69b29ea69ab25859057863fd6f2731a4"} Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.046005 5173 scope.go:117] "RemoveContainer" containerID="1dba72aa6716362ddf3006bbd7ef572748927c7f075282fff51a5c6b6e1233b5" Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.082009 5173 scope.go:117] "RemoveContainer" containerID="2befee708453c170540b24c01ae539674f19508c0fea512e58573741e8dd92ef" Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.099861 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.103889 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"] Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.114783 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.116964 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-ppjzv"] Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.130785 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-54cg5"] Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.143546 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-54cg5"] Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.188252 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7j5wv"] Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.198448 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.203098 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-7j5wv"] Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.374581 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns"] Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.412417 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b47dcf89f-kxn55"] Dec 09 14:14:47 crc kubenswrapper[5173]: W1209 14:14:47.430919 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cff0b6a_d823_4356_8362_b7e829522f42.slice/crio-c14c0918d09ee20f67d360207032bf7c2019caa0323723b932f99abd575243ac WatchSource:0}: Error finding container 
c14c0918d09ee20f67d360207032bf7c2019caa0323723b932f99abd575243ac: Status 404 returned error can't find the container with id c14c0918d09ee20f67d360207032bf7c2019caa0323723b932f99abd575243ac Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.882559 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36b504f1-6aae-4802-ab5d-ce89caf2f742" path="/var/lib/kubelet/pods/36b504f1-6aae-4802-ab5d-ce89caf2f742/volumes" Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.883732 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4751d5a1-9958-4f4f-aa73-a94b587a09b7" path="/var/lib/kubelet/pods/4751d5a1-9958-4f4f-aa73-a94b587a09b7/volumes" Dec 09 14:14:47 crc kubenswrapper[5173]: I1209 14:14:47.884340 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7b4a60a-1ec3-4e17-91ed-abb971cdaa54" path="/var/lib/kubelet/pods/f7b4a60a-1ec3-4e17-91ed-abb971cdaa54/volumes" Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.047154 5173 generic.go:358] "Generic (PLEG): container finished" podID="a79afc8b-ca22-4e56-b7a9-d725b23e30ff" containerID="67225469e0f1612f0641a22816052540e2d74a5fc97e3b66321bf5ed6a0fc8e1" exitCode=0 Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.047241 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq8bj" event={"ID":"a79afc8b-ca22-4e56-b7a9-d725b23e30ff","Type":"ContainerDied","Data":"67225469e0f1612f0641a22816052540e2d74a5fc97e3b66321bf5ed6a0fc8e1"} Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.053035 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"2340ebd9-eba3-4b52-a6ce-a3e6fba54556","Type":"ContainerStarted","Data":"1725e7cad9846a9d214167b47e273be3808baaf77136508ad9c08a5b412334c5"} Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.053078 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"2340ebd9-eba3-4b52-a6ce-a3e6fba54556","Type":"ContainerStarted","Data":"1a608fd64290cedba267eba4f00478cd96024291da9b66d0595ce32242ff5902"} Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.054527 5173 generic.go:358] "Generic (PLEG): container finished" podID="ae976069-cbe3-4195-8666-ec1e96e284e9" containerID="58275f4a95cc3b62c1e3fd0940879b978f28ca47c94d1166736bc5c882ffc913" exitCode=0 Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.054566 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-72sct" event={"ID":"ae976069-cbe3-4195-8666-ec1e96e284e9","Type":"ContainerDied","Data":"58275f4a95cc3b62c1e3fd0940879b978f28ca47c94d1166736bc5c882ffc913"} Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.056454 5173 generic.go:358] "Generic (PLEG): container finished" podID="558ba319-3c10-46e3-a9e8-64e5b28db3ea" containerID="239d3de44897a990717f1666ca9fe3da6657b9179fbbecf06cc82624c72ded4b" exitCode=0 Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.056521 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7sjh" event={"ID":"558ba319-3c10-46e3-a9e8-64e5b28db3ea","Type":"ContainerDied","Data":"239d3de44897a990717f1666ca9fe3da6657b9179fbbecf06cc82624c72ded4b"} Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.059863 5173 generic.go:358] "Generic (PLEG): container finished" podID="07be13ae-949a-42e1-9366-afe32b5480f2" 
containerID="a08d9f6480d56871633c0866a5e863c615ebfff2b48646189773d679a33bf2db" exitCode=0 Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.059965 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmw7h" event={"ID":"07be13ae-949a-42e1-9366-afe32b5480f2","Type":"ContainerDied","Data":"a08d9f6480d56871633c0866a5e863c615ebfff2b48646189773d679a33bf2db"} Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.063145 5173 generic.go:358] "Generic (PLEG): container finished" podID="8536effa-529d-4962-ab4e-0d8e1c3c4d93" containerID="1a32b04719295f2b2643e8e0f85842fd69b29ea69ab25859057863fd6f2731a4" exitCode=0 Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.063196 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95c8n" event={"ID":"8536effa-529d-4962-ab4e-0d8e1c3c4d93","Type":"ContainerDied","Data":"1a32b04719295f2b2643e8e0f85842fd69b29ea69ab25859057863fd6f2731a4"} Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.069795 5173 generic.go:358] "Generic (PLEG): container finished" podID="e9c76269-0d49-4517-be74-f6fe064135dd" containerID="c58833530c173f6119841add3113c7ae26d6232e316531209664e4c27b331c9b" exitCode=0 Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.069875 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhv4r" event={"ID":"e9c76269-0d49-4517-be74-f6fe064135dd","Type":"ContainerDied","Data":"c58833530c173f6119841add3113c7ae26d6232e316531209664e4c27b331c9b"} Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.071916 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" event={"ID":"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b","Type":"ContainerStarted","Data":"fe6c528281f676e4bccd4918395818e89c6ce4ca9525d875ad4d432b7d387b97"} Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.071950 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" event={"ID":"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b","Type":"ContainerStarted","Data":"b30ab968137f41540015b4fcf6296390e5ba1f865f18b04aba6ca0bcb9581ad2"} Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.073452 5173 generic.go:358] "Generic (PLEG): container finished" podID="4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67" containerID="fc8bdad2874eb21527848709f04bd44ddbf086708b664e798b79ffae73adc100" exitCode=0 Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.073525 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b624h" event={"ID":"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67","Type":"ContainerDied","Data":"fc8bdad2874eb21527848709f04bd44ddbf086708b664e798b79ffae73adc100"} Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.074629 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.076255 5173 generic.go:358] "Generic (PLEG): container finished" podID="d4b50aa3-6227-4e8a-8dbd-e56b695472c1" containerID="788e07d69c3506f9073bef94ac28651de5d22cce528c3084cba445a1d7a4c103" exitCode=0 Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.076321 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjpqk" 
event={"ID":"d4b50aa3-6227-4e8a-8dbd-e56b695472c1","Type":"ContainerDied","Data":"788e07d69c3506f9073bef94ac28651de5d22cce528c3084cba445a1d7a4c103"} Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.078717 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" event={"ID":"2cff0b6a-d823-4356-8362-b7e829522f42","Type":"ContainerStarted","Data":"4d8349bd239253b6f686e05f29bf3750b1694bba5ac8173fe2ad4e5be9ad53f4"} Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.078753 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" event={"ID":"2cff0b6a-d823-4356-8362-b7e829522f42","Type":"ContainerStarted","Data":"c14c0918d09ee20f67d360207032bf7c2019caa0323723b932f99abd575243ac"} Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.079091 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.183129 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=9.183047522 podStartE2EDuration="9.183047522s" podCreationTimestamp="2025-12-09 14:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:48.175755965 +0000 UTC m=+171.101038222" watchObservedRunningTime="2025-12-09 14:14:48.183047522 +0000 UTC m=+171.108329779" Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.269962 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" podStartSLOduration=11.269937667 podStartE2EDuration="11.269937667s" podCreationTimestamp="2025-12-09 14:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:48.243193514 +0000 UTC m=+171.168475781" watchObservedRunningTime="2025-12-09 14:14:48.269937667 +0000 UTC m=+171.195219914" Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.305638 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" podStartSLOduration=11.305617537 podStartE2EDuration="11.305617537s" podCreationTimestamp="2025-12-09 14:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:48.305377359 +0000 UTC m=+171.230659616" watchObservedRunningTime="2025-12-09 14:14:48.305617537 +0000 UTC m=+171.230899784" Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.720065 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:14:48 crc kubenswrapper[5173]: I1209 14:14:48.805610 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:14:49 crc kubenswrapper[5173]: I1209 14:14:49.130239 5173 generic.go:358] "Generic (PLEG): container finished" podID="2340ebd9-eba3-4b52-a6ce-a3e6fba54556" containerID="1725e7cad9846a9d214167b47e273be3808baaf77136508ad9c08a5b412334c5" exitCode=0 Dec 09 14:14:49 crc kubenswrapper[5173]: I1209 
Dec 09 14:14:49 crc kubenswrapper[5173]: I1209 14:14:49.132703 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-72sct" event={"ID":"ae976069-cbe3-4195-8666-ec1e96e284e9","Type":"ContainerStarted","Data":"2f5d285072af1c1e7dc639151ecd13906fb57bfb974c0fc1de48798d8268cbce"}
Dec 09 14:14:49 crc kubenswrapper[5173]: I1209 14:14:49.162325 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-72sct" podStartSLOduration=6.262058882 podStartE2EDuration="38.162308971s" podCreationTimestamp="2025-12-09 14:14:11 +0000 UTC" firstStartedPulling="2025-12-09 14:14:14.747824789 +0000 UTC m=+137.673107036" lastFinishedPulling="2025-12-09 14:14:46.648074878 +0000 UTC m=+169.573357125" observedRunningTime="2025-12-09 14:14:49.160019029 +0000 UTC m=+172.085301316" watchObservedRunningTime="2025-12-09 14:14:49.162308971 +0000 UTC m=+172.087591218"
Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.144695 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b624h" event={"ID":"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67","Type":"ContainerStarted","Data":"4961d71ea52d324c0db7ab20aca831f61c1d8ebce046b42aec424914cad337aa"}
Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.146689 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjpqk" event={"ID":"d4b50aa3-6227-4e8a-8dbd-e56b695472c1","Type":"ContainerStarted","Data":"551e5fd3f76f13ad4c61985070346c28c651245d542ffc9c1ae64922a22a18aa"}
Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.148523 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq8bj" event={"ID":"a79afc8b-ca22-4e56-b7a9-d725b23e30ff","Type":"ContainerStarted","Data":"e70547d63b919901fa55435fada87003be15aa53a66d4781392f3192b1aa43fc"}
Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.150636 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7sjh" event={"ID":"558ba319-3c10-46e3-a9e8-64e5b28db3ea","Type":"ContainerStarted","Data":"2cace769e0c3a8ca7bf1f601f0c629653d2e21672cb1dd5cadb9daa3f554feeb"}
Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.152612 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmw7h" event={"ID":"07be13ae-949a-42e1-9366-afe32b5480f2","Type":"ContainerStarted","Data":"253df09fb93bf05100a2fd1ca2c374cd41410156d05fa74816c80d97de0d3fe2"}
Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.154324 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95c8n" event={"ID":"8536effa-529d-4962-ab4e-0d8e1c3c4d93","Type":"ContainerStarted","Data":"c2567827565dba07d67fb187c07fd4ca6d10f97f24e4b0a560ddb67ff6dd1480"}
Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.157228 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhv4r" event={"ID":"e9c76269-0d49-4517-be74-f6fe064135dd","Type":"ContainerStarted","Data":"c413d81f00bf93810f6adc76d6d35e930671d3b41d2a81ed8cca14fdc3913003"}
Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.171854 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b624h" podStartSLOduration=8.190134041 podStartE2EDuration="41.1718342s" podCreationTimestamp="2025-12-09 14:14:09 +0000 UTC" firstStartedPulling="2025-12-09 14:14:13.666961438 +0000 UTC m=+136.592243685" lastFinishedPulling="2025-12-09 14:14:46.648661597 +0000 UTC m=+169.573943844" observedRunningTime="2025-12-09 14:14:50.16827067 +0000 UTC m=+173.093552927" watchObservedRunningTime="2025-12-09 14:14:50.1718342 +0000 UTC m=+173.097116447"
Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.196320 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xmw7h" podStartSLOduration=7.259871461 podStartE2EDuration="38.196301903s" podCreationTimestamp="2025-12-09 14:14:12 +0000 UTC" firstStartedPulling="2025-12-09 14:14:15.769295291 +0000 UTC m=+138.694577538" lastFinishedPulling="2025-12-09 14:14:46.705725733 +0000 UTC m=+169.631007980" observedRunningTime="2025-12-09 14:14:50.191836843 +0000 UTC m=+173.117119110" watchObservedRunningTime="2025-12-09 14:14:50.196301903 +0000 UTC m=+173.121584160"
Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.224552 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-95c8n" podStartSLOduration=7.5668525429999995 podStartE2EDuration="41.224527861s" podCreationTimestamp="2025-12-09 14:14:09 +0000 UTC" firstStartedPulling="2025-12-09 14:14:13.011108105 +0000 UTC m=+135.936390352" lastFinishedPulling="2025-12-09 14:14:46.668783423 +0000 UTC m=+169.594065670" observedRunningTime="2025-12-09 14:14:50.220276148 +0000 UTC m=+173.145558405" watchObservedRunningTime="2025-12-09 14:14:50.224527861 +0000 UTC m=+173.149810108"
Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.244373 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bjpqk" podStartSLOduration=8.291426616 podStartE2EDuration="41.244337718s" podCreationTimestamp="2025-12-09 14:14:09 +0000 UTC" firstStartedPulling="2025-12-09 14:14:13.712925739 +0000 UTC m=+136.638207986" lastFinishedPulling="2025-12-09 14:14:46.665836841 +0000 UTC m=+169.591119088" observedRunningTime="2025-12-09 14:14:50.238972301 +0000 UTC m=+173.164254558" watchObservedRunningTime="2025-12-09 14:14:50.244337718 +0000 UTC m=+173.169619955"
Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.285083 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vhv4r" podStartSLOduration=7.347613418 podStartE2EDuration="39.285055965s" podCreationTimestamp="2025-12-09 14:14:11 +0000 UTC" firstStartedPulling="2025-12-09 14:14:14.738216739 +0000 UTC m=+137.663498986" lastFinishedPulling="2025-12-09 14:14:46.675659286 +0000 UTC m=+169.600941533" observedRunningTime="2025-12-09 14:14:50.284874249 +0000 UTC m=+173.210156506" watchObservedRunningTime="2025-12-09 14:14:50.285055965 +0000 UTC m=+173.210338212"
Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.286708 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mq8bj" podStartSLOduration=5.226185664 podStartE2EDuration="41.286700296s" podCreationTimestamp="2025-12-09 14:14:09 +0000 UTC" firstStartedPulling="2025-12-09 14:14:10.587751492 +0000 UTC m=+133.513033739" lastFinishedPulling="2025-12-09 14:14:46.648266114 +0000 UTC m=+169.573548371" observedRunningTime="2025-12-09 14:14:50.262435011 +0000 UTC m=+173.187717278" watchObservedRunningTime="2025-12-09 14:14:50.286700296 +0000 UTC m=+173.211982543"
m=+169.573548371" observedRunningTime="2025-12-09 14:14:50.262435011 +0000 UTC m=+173.187717278" watchObservedRunningTime="2025-12-09 14:14:50.286700296 +0000 UTC m=+173.211982543" Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.291763 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b624h" Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.291817 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-b624h" Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.306263 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b7sjh" podStartSLOduration=6.364041009 podStartE2EDuration="38.306241824s" podCreationTimestamp="2025-12-09 14:14:12 +0000 UTC" firstStartedPulling="2025-12-09 14:14:14.752799023 +0000 UTC m=+137.678081270" lastFinishedPulling="2025-12-09 14:14:46.694999838 +0000 UTC m=+169.620282085" observedRunningTime="2025-12-09 14:14:50.305407978 +0000 UTC m=+173.230690225" watchObservedRunningTime="2025-12-09 14:14:50.306241824 +0000 UTC m=+173.231524071" Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.519846 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.603316 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2340ebd9-eba3-4b52-a6ce-a3e6fba54556-kube-api-access\") pod \"2340ebd9-eba3-4b52-a6ce-a3e6fba54556\" (UID: \"2340ebd9-eba3-4b52-a6ce-a3e6fba54556\") " Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.603414 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2340ebd9-eba3-4b52-a6ce-a3e6fba54556-kubelet-dir\") pod \"2340ebd9-eba3-4b52-a6ce-a3e6fba54556\" (UID: \"2340ebd9-eba3-4b52-a6ce-a3e6fba54556\") " Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.603729 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2340ebd9-eba3-4b52-a6ce-a3e6fba54556-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2340ebd9-eba3-4b52-a6ce-a3e6fba54556" (UID: "2340ebd9-eba3-4b52-a6ce-a3e6fba54556"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.613486 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2340ebd9-eba3-4b52-a6ce-a3e6fba54556-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2340ebd9-eba3-4b52-a6ce-a3e6fba54556" (UID: "2340ebd9-eba3-4b52-a6ce-a3e6fba54556"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.704540 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2340ebd9-eba3-4b52-a6ce-a3e6fba54556-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:50 crc kubenswrapper[5173]: I1209 14:14:50.704587 5173 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2340ebd9-eba3-4b52-a6ce-a3e6fba54556-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:51 crc kubenswrapper[5173]: I1209 14:14:51.166044 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 09 14:14:51 crc kubenswrapper[5173]: I1209 14:14:51.166031 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"2340ebd9-eba3-4b52-a6ce-a3e6fba54556","Type":"ContainerDied","Data":"1a608fd64290cedba267eba4f00478cd96024291da9b66d0595ce32242ff5902"} Dec 09 14:14:51 crc kubenswrapper[5173]: I1209 14:14:51.166954 5173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a608fd64290cedba267eba4f00478cd96024291da9b66d0595ce32242ff5902" Dec 09 14:14:51 crc kubenswrapper[5173]: I1209 14:14:51.423801 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-b624h" podUID="4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67" containerName="registry-server" probeResult="failure" output=< Dec 09 14:14:51 crc kubenswrapper[5173]: timeout: failed to connect service ":50051" within 1s Dec 09 14:14:51 crc kubenswrapper[5173]: > Dec 09 14:14:52 crc kubenswrapper[5173]: I1209 14:14:52.829316 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-72sct" Dec 09 14:14:52 crc kubenswrapper[5173]: I1209 14:14:52.829693 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-72sct" Dec 09 14:14:52 crc kubenswrapper[5173]: I1209 14:14:52.870780 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-72sct" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.217993 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-72sct" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.345913 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vhv4r" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.345965 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-vhv4r" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.382523 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vhv4r" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.394134 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.394846 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7b4a60a-1ec3-4e17-91ed-abb971cdaa54" containerName="kube-multus-additional-cni-plugins" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.394872 
5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7b4a60a-1ec3-4e17-91ed-abb971cdaa54" containerName="kube-multus-additional-cni-plugins" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.394881 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2340ebd9-eba3-4b52-a6ce-a3e6fba54556" containerName="pruner" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.394888 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="2340ebd9-eba3-4b52-a6ce-a3e6fba54556" containerName="pruner" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.394977 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="2340ebd9-eba3-4b52-a6ce-a3e6fba54556" containerName="pruner" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.394991 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7b4a60a-1ec3-4e17-91ed-abb971cdaa54" containerName="kube-multus-additional-cni-plugins" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.801001 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xmw7h" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.801041 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b7sjh" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.801055 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.801070 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-xmw7h" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.801220 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-b7sjh" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.801224 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.807862 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.808509 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.954605 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/089f8d89-d0b0-4ebd-a28c-d5a0da357b1a-kubelet-dir\") pod \"installer-12-crc\" (UID: \"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.954940 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/089f8d89-d0b0-4ebd-a28c-d5a0da357b1a-kube-api-access\") pod \"installer-12-crc\" (UID: \"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:14:53 crc kubenswrapper[5173]: I1209 14:14:53.955044 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/089f8d89-d0b0-4ebd-a28c-d5a0da357b1a-var-lock\") pod \"installer-12-crc\" (UID: \"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:14:54 crc kubenswrapper[5173]: I1209 14:14:54.056198 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/089f8d89-d0b0-4ebd-a28c-d5a0da357b1a-var-lock\") pod \"installer-12-crc\" (UID: \"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:14:54 crc kubenswrapper[5173]: I1209 14:14:54.056285 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/089f8d89-d0b0-4ebd-a28c-d5a0da357b1a-kubelet-dir\") pod \"installer-12-crc\" (UID: \"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:14:54 crc kubenswrapper[5173]: I1209 14:14:54.056303 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/089f8d89-d0b0-4ebd-a28c-d5a0da357b1a-kube-api-access\") pod \"installer-12-crc\" (UID: \"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:14:54 crc kubenswrapper[5173]: I1209 14:14:54.056387 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/089f8d89-d0b0-4ebd-a28c-d5a0da357b1a-var-lock\") pod \"installer-12-crc\" (UID: \"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:14:54 crc kubenswrapper[5173]: I1209 14:14:54.056479 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/089f8d89-d0b0-4ebd-a28c-d5a0da357b1a-kubelet-dir\") pod \"installer-12-crc\" (UID: \"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:14:54 crc kubenswrapper[5173]: I1209 
14:14:54.080844 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/089f8d89-d0b0-4ebd-a28c-d5a0da357b1a-kube-api-access\") pod \"installer-12-crc\" (UID: \"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:14:54 crc kubenswrapper[5173]: I1209 14:14:54.123313 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:14:54 crc kubenswrapper[5173]: I1209 14:14:54.239967 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vhv4r" Dec 09 14:14:54 crc kubenswrapper[5173]: I1209 14:14:54.550133 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 09 14:14:54 crc kubenswrapper[5173]: I1209 14:14:54.835628 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xmw7h" podUID="07be13ae-949a-42e1-9366-afe32b5480f2" containerName="registry-server" probeResult="failure" output=< Dec 09 14:14:54 crc kubenswrapper[5173]: timeout: failed to connect service ":50051" within 1s Dec 09 14:14:54 crc kubenswrapper[5173]: > Dec 09 14:14:54 crc kubenswrapper[5173]: I1209 14:14:54.842662 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b7sjh" podUID="558ba319-3c10-46e3-a9e8-64e5b28db3ea" containerName="registry-server" probeResult="failure" output=< Dec 09 14:14:54 crc kubenswrapper[5173]: timeout: failed to connect service ":50051" within 1s Dec 09 14:14:54 crc kubenswrapper[5173]: > Dec 09 14:14:55 crc kubenswrapper[5173]: I1209 14:14:55.195469 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a","Type":"ContainerStarted","Data":"6263334641af4a646fc51346ccf41943faff17afe83aa62e54d84ce86ed0c653"} Dec 09 14:14:56 crc kubenswrapper[5173]: I1209 14:14:56.202482 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a","Type":"ContainerStarted","Data":"9a03f5e6354372f720d66dddde10fb0f2293c71a6c18790d0282d3f029c86077"} Dec 09 14:14:56 crc kubenswrapper[5173]: I1209 14:14:56.246375 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=3.246332842 podStartE2EDuration="3.246332842s" podCreationTimestamp="2025-12-09 14:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:14:56.222965505 +0000 UTC m=+179.148247772" watchObservedRunningTime="2025-12-09 14:14:56.246332842 +0000 UTC m=+179.171615099" Dec 09 14:14:56 crc kubenswrapper[5173]: I1209 14:14:56.249410 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhv4r"] Dec 09 14:14:56 crc kubenswrapper[5173]: I1209 14:14:56.249709 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vhv4r" podUID="e9c76269-0d49-4517-be74-f6fe064135dd" containerName="registry-server" containerID="cri-o://c413d81f00bf93810f6adc76d6d35e930671d3b41d2a81ed8cca14fdc3913003" gracePeriod=2 Dec 09 14:14:56 crc kubenswrapper[5173]: I1209 14:14:56.644317 5173 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vhv4r" Dec 09 14:14:56 crc kubenswrapper[5173]: I1209 14:14:56.693058 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2ts2\" (UniqueName: \"kubernetes.io/projected/e9c76269-0d49-4517-be74-f6fe064135dd-kube-api-access-h2ts2\") pod \"e9c76269-0d49-4517-be74-f6fe064135dd\" (UID: \"e9c76269-0d49-4517-be74-f6fe064135dd\") " Dec 09 14:14:56 crc kubenswrapper[5173]: I1209 14:14:56.693160 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c76269-0d49-4517-be74-f6fe064135dd-utilities\") pod \"e9c76269-0d49-4517-be74-f6fe064135dd\" (UID: \"e9c76269-0d49-4517-be74-f6fe064135dd\") " Dec 09 14:14:56 crc kubenswrapper[5173]: I1209 14:14:56.693214 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c76269-0d49-4517-be74-f6fe064135dd-catalog-content\") pod \"e9c76269-0d49-4517-be74-f6fe064135dd\" (UID: \"e9c76269-0d49-4517-be74-f6fe064135dd\") " Dec 09 14:14:56 crc kubenswrapper[5173]: I1209 14:14:56.694661 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9c76269-0d49-4517-be74-f6fe064135dd-utilities" (OuterVolumeSpecName: "utilities") pod "e9c76269-0d49-4517-be74-f6fe064135dd" (UID: "e9c76269-0d49-4517-be74-f6fe064135dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:14:56 crc kubenswrapper[5173]: I1209 14:14:56.700937 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c76269-0d49-4517-be74-f6fe064135dd-kube-api-access-h2ts2" (OuterVolumeSpecName: "kube-api-access-h2ts2") pod "e9c76269-0d49-4517-be74-f6fe064135dd" (UID: "e9c76269-0d49-4517-be74-f6fe064135dd"). InnerVolumeSpecName "kube-api-access-h2ts2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:14:56 crc kubenswrapper[5173]: I1209 14:14:56.708758 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9c76269-0d49-4517-be74-f6fe064135dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9c76269-0d49-4517-be74-f6fe064135dd" (UID: "e9c76269-0d49-4517-be74-f6fe064135dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:14:56 crc kubenswrapper[5173]: I1209 14:14:56.795920 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h2ts2\" (UniqueName: \"kubernetes.io/projected/e9c76269-0d49-4517-be74-f6fe064135dd-kube-api-access-h2ts2\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:56 crc kubenswrapper[5173]: I1209 14:14:56.795963 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c76269-0d49-4517-be74-f6fe064135dd-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:56 crc kubenswrapper[5173]: I1209 14:14:56.795976 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c76269-0d49-4517-be74-f6fe064135dd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:14:57 crc kubenswrapper[5173]: I1209 14:14:57.210279 5173 generic.go:358] "Generic (PLEG): container finished" podID="e9c76269-0d49-4517-be74-f6fe064135dd" containerID="c413d81f00bf93810f6adc76d6d35e930671d3b41d2a81ed8cca14fdc3913003" exitCode=0 Dec 09 14:14:57 crc kubenswrapper[5173]: I1209 14:14:57.212311 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vhv4r" Dec 09 14:14:57 crc kubenswrapper[5173]: I1209 14:14:57.214300 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhv4r" event={"ID":"e9c76269-0d49-4517-be74-f6fe064135dd","Type":"ContainerDied","Data":"c413d81f00bf93810f6adc76d6d35e930671d3b41d2a81ed8cca14fdc3913003"} Dec 09 14:14:57 crc kubenswrapper[5173]: I1209 14:14:57.214395 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhv4r" event={"ID":"e9c76269-0d49-4517-be74-f6fe064135dd","Type":"ContainerDied","Data":"e17f67924b4da19ccc413b4ef22294355d8e6d0e63048f382c3ae32c17dc519f"} Dec 09 14:14:57 crc kubenswrapper[5173]: I1209 14:14:57.214423 5173 scope.go:117] "RemoveContainer" containerID="c413d81f00bf93810f6adc76d6d35e930671d3b41d2a81ed8cca14fdc3913003" Dec 09 14:14:57 crc kubenswrapper[5173]: I1209 14:14:57.249819 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhv4r"] Dec 09 14:14:57 crc kubenswrapper[5173]: I1209 14:14:57.252425 5173 scope.go:117] "RemoveContainer" containerID="c58833530c173f6119841add3113c7ae26d6232e316531209664e4c27b331c9b" Dec 09 14:14:57 crc kubenswrapper[5173]: I1209 14:14:57.255474 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhv4r"] Dec 09 14:14:57 crc kubenswrapper[5173]: I1209 14:14:57.279770 5173 scope.go:117] "RemoveContainer" containerID="03cff7e7f4ed2781991d810fbe846c645a8b4898bcaa620f83633bf64bde3b16" Dec 09 14:14:57 crc kubenswrapper[5173]: I1209 14:14:57.304445 5173 scope.go:117] "RemoveContainer" containerID="c413d81f00bf93810f6adc76d6d35e930671d3b41d2a81ed8cca14fdc3913003" Dec 09 14:14:57 crc kubenswrapper[5173]: E1209 14:14:57.304942 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c413d81f00bf93810f6adc76d6d35e930671d3b41d2a81ed8cca14fdc3913003\": container with ID starting with c413d81f00bf93810f6adc76d6d35e930671d3b41d2a81ed8cca14fdc3913003 not found: ID does not exist" containerID="c413d81f00bf93810f6adc76d6d35e930671d3b41d2a81ed8cca14fdc3913003" Dec 09 14:14:57 crc kubenswrapper[5173]: I1209 14:14:57.304994 5173 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c413d81f00bf93810f6adc76d6d35e930671d3b41d2a81ed8cca14fdc3913003"} err="failed to get container status \"c413d81f00bf93810f6adc76d6d35e930671d3b41d2a81ed8cca14fdc3913003\": rpc error: code = NotFound desc = could not find container \"c413d81f00bf93810f6adc76d6d35e930671d3b41d2a81ed8cca14fdc3913003\": container with ID starting with c413d81f00bf93810f6adc76d6d35e930671d3b41d2a81ed8cca14fdc3913003 not found: ID does not exist" Dec 09 14:14:57 crc kubenswrapper[5173]: I1209 14:14:57.305045 5173 scope.go:117] "RemoveContainer" containerID="c58833530c173f6119841add3113c7ae26d6232e316531209664e4c27b331c9b" Dec 09 14:14:57 crc kubenswrapper[5173]: E1209 14:14:57.305310 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c58833530c173f6119841add3113c7ae26d6232e316531209664e4c27b331c9b\": container with ID starting with c58833530c173f6119841add3113c7ae26d6232e316531209664e4c27b331c9b not found: ID does not exist" containerID="c58833530c173f6119841add3113c7ae26d6232e316531209664e4c27b331c9b" Dec 09 14:14:57 crc kubenswrapper[5173]: I1209 14:14:57.305339 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c58833530c173f6119841add3113c7ae26d6232e316531209664e4c27b331c9b"} err="failed to get container status \"c58833530c173f6119841add3113c7ae26d6232e316531209664e4c27b331c9b\": rpc error: code = NotFound desc = could not find container \"c58833530c173f6119841add3113c7ae26d6232e316531209664e4c27b331c9b\": container with ID starting with c58833530c173f6119841add3113c7ae26d6232e316531209664e4c27b331c9b not found: ID does not exist" Dec 09 14:14:57 crc kubenswrapper[5173]: I1209 14:14:57.305482 5173 scope.go:117] "RemoveContainer" containerID="03cff7e7f4ed2781991d810fbe846c645a8b4898bcaa620f83633bf64bde3b16" Dec 09 14:14:57 crc kubenswrapper[5173]: E1209 14:14:57.305904 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03cff7e7f4ed2781991d810fbe846c645a8b4898bcaa620f83633bf64bde3b16\": container with ID starting with 03cff7e7f4ed2781991d810fbe846c645a8b4898bcaa620f83633bf64bde3b16 not found: ID does not exist" containerID="03cff7e7f4ed2781991d810fbe846c645a8b4898bcaa620f83633bf64bde3b16" Dec 09 14:14:57 crc kubenswrapper[5173]: I1209 14:14:57.305929 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03cff7e7f4ed2781991d810fbe846c645a8b4898bcaa620f83633bf64bde3b16"} err="failed to get container status \"03cff7e7f4ed2781991d810fbe846c645a8b4898bcaa620f83633bf64bde3b16\": rpc error: code = NotFound desc = could not find container \"03cff7e7f4ed2781991d810fbe846c645a8b4898bcaa620f83633bf64bde3b16\": container with ID starting with 03cff7e7f4ed2781991d810fbe846c645a8b4898bcaa620f83633bf64bde3b16 not found: ID does not exist" Dec 09 14:14:57 crc kubenswrapper[5173]: I1209 14:14:57.880237 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9c76269-0d49-4517-be74-f6fe064135dd" path="/var/lib/kubelet/pods/e9c76269-0d49-4517-be74-f6fe064135dd/volumes" Dec 09 14:14:59 crc kubenswrapper[5173]: I1209 14:14:59.705658 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mq8bj" Dec 09 14:14:59 crc kubenswrapper[5173]: I1209 14:14:59.706202 5173 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-mq8bj" Dec 09 14:14:59 crc kubenswrapper[5173]: I1209 14:14:59.749263 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mq8bj" Dec 09 14:14:59 crc kubenswrapper[5173]: I1209 14:14:59.932314 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-95c8n" Dec 09 14:14:59 crc kubenswrapper[5173]: I1209 14:14:59.932450 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-95c8n" Dec 09 14:14:59 crc kubenswrapper[5173]: I1209 14:14:59.970901 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-95c8n" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.136698 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-bjpqk" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.138544 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bjpqk" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.151512 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj"] Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.152131 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e9c76269-0d49-4517-be74-f6fe064135dd" containerName="registry-server" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.152146 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c76269-0d49-4517-be74-f6fe064135dd" containerName="registry-server" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.152159 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e9c76269-0d49-4517-be74-f6fe064135dd" containerName="extract-utilities" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.152166 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c76269-0d49-4517-be74-f6fe064135dd" containerName="extract-utilities" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.152180 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e9c76269-0d49-4517-be74-f6fe064135dd" containerName="extract-content" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.152186 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c76269-0d49-4517-be74-f6fe064135dd" containerName="extract-content" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.152285 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="e9c76269-0d49-4517-be74-f6fe064135dd" containerName="registry-server" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.769934 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bjpqk" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.769988 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj"] Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.770042 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.770125 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b624h" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.773912 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.773952 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.811006 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mq8bj" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.812689 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-95c8n" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.814237 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b624h" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.814521 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bjpqk" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.846818 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d6161e7-c32b-4a76-99f7-844c344211b8-secret-volume\") pod \"collect-profiles-29421495-4bxwj\" (UID: \"8d6161e7-c32b-4a76-99f7-844c344211b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.846875 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d6161e7-c32b-4a76-99f7-844c344211b8-config-volume\") pod \"collect-profiles-29421495-4bxwj\" (UID: \"8d6161e7-c32b-4a76-99f7-844c344211b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.847304 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w49j8\" (UniqueName: \"kubernetes.io/projected/8d6161e7-c32b-4a76-99f7-844c344211b8-kube-api-access-w49j8\") pod \"collect-profiles-29421495-4bxwj\" (UID: \"8d6161e7-c32b-4a76-99f7-844c344211b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.948711 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w49j8\" (UniqueName: \"kubernetes.io/projected/8d6161e7-c32b-4a76-99f7-844c344211b8-kube-api-access-w49j8\") pod \"collect-profiles-29421495-4bxwj\" (UID: \"8d6161e7-c32b-4a76-99f7-844c344211b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.948788 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d6161e7-c32b-4a76-99f7-844c344211b8-secret-volume\") pod \"collect-profiles-29421495-4bxwj\" (UID: 
\"8d6161e7-c32b-4a76-99f7-844c344211b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.949993 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d6161e7-c32b-4a76-99f7-844c344211b8-config-volume\") pod \"collect-profiles-29421495-4bxwj\" (UID: \"8d6161e7-c32b-4a76-99f7-844c344211b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.950750 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d6161e7-c32b-4a76-99f7-844c344211b8-config-volume\") pod \"collect-profiles-29421495-4bxwj\" (UID: \"8d6161e7-c32b-4a76-99f7-844c344211b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.962336 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d6161e7-c32b-4a76-99f7-844c344211b8-secret-volume\") pod \"collect-profiles-29421495-4bxwj\" (UID: \"8d6161e7-c32b-4a76-99f7-844c344211b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj" Dec 09 14:15:00 crc kubenswrapper[5173]: I1209 14:15:00.965644 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w49j8\" (UniqueName: \"kubernetes.io/projected/8d6161e7-c32b-4a76-99f7-844c344211b8-kube-api-access-w49j8\") pod \"collect-profiles-29421495-4bxwj\" (UID: \"8d6161e7-c32b-4a76-99f7-844c344211b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj" Dec 09 14:15:01 crc kubenswrapper[5173]: I1209 14:15:01.109561 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj" Dec 09 14:15:01 crc kubenswrapper[5173]: I1209 14:15:01.489014 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj"] Dec 09 14:15:01 crc kubenswrapper[5173]: W1209 14:15:01.493266 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d6161e7_c32b_4a76_99f7_844c344211b8.slice/crio-24ba5ac36c0411bb50b2d0ac0d1cd770eb780fbea74fca3a8c1b167f494ec662 WatchSource:0}: Error finding container 24ba5ac36c0411bb50b2d0ac0d1cd770eb780fbea74fca3a8c1b167f494ec662: Status 404 returned error can't find the container with id 24ba5ac36c0411bb50b2d0ac0d1cd770eb780fbea74fca3a8c1b167f494ec662 Dec 09 14:15:02 crc kubenswrapper[5173]: I1209 14:15:02.246140 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj" event={"ID":"8d6161e7-c32b-4a76-99f7-844c344211b8","Type":"ContainerStarted","Data":"24ba5ac36c0411bb50b2d0ac0d1cd770eb780fbea74fca3a8c1b167f494ec662"} Dec 09 14:15:02 crc kubenswrapper[5173]: I1209 14:15:02.259740 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b624h"] Dec 09 14:15:02 crc kubenswrapper[5173]: I1209 14:15:02.260101 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b624h" podUID="4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67" containerName="registry-server" containerID="cri-o://4961d71ea52d324c0db7ab20aca831f61c1d8ebce046b42aec424914cad337aa" gracePeriod=2 Dec 09 14:15:02 crc kubenswrapper[5173]: I1209 14:15:02.853096 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bjpqk"] Dec 09 14:15:02 crc kubenswrapper[5173]: I1209 14:15:02.853626 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bjpqk" podUID="d4b50aa3-6227-4e8a-8dbd-e56b695472c1" containerName="registry-server" containerID="cri-o://551e5fd3f76f13ad4c61985070346c28c651245d542ffc9c1ae64922a22a18aa" gracePeriod=2 Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.174513 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b624h" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.252966 5173 generic.go:358] "Generic (PLEG): container finished" podID="4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67" containerID="4961d71ea52d324c0db7ab20aca831f61c1d8ebce046b42aec424914cad337aa" exitCode=0 Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.253063 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b624h" event={"ID":"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67","Type":"ContainerDied","Data":"4961d71ea52d324c0db7ab20aca831f61c1d8ebce046b42aec424914cad337aa"} Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.253136 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b624h" event={"ID":"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67","Type":"ContainerDied","Data":"c668af20b4709a77feaceb54cb54dd31413383c8759049d5535bb5c15c2a0ec0"} Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.253210 5173 scope.go:117] "RemoveContainer" containerID="4961d71ea52d324c0db7ab20aca831f61c1d8ebce046b42aec424914cad337aa" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.255302 5173 generic.go:358] "Generic (PLEG): container finished" podID="d4b50aa3-6227-4e8a-8dbd-e56b695472c1" containerID="551e5fd3f76f13ad4c61985070346c28c651245d542ffc9c1ae64922a22a18aa" exitCode=0 Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.255490 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjpqk" event={"ID":"d4b50aa3-6227-4e8a-8dbd-e56b695472c1","Type":"ContainerDied","Data":"551e5fd3f76f13ad4c61985070346c28c651245d542ffc9c1ae64922a22a18aa"} Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.255539 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bjpqk" event={"ID":"d4b50aa3-6227-4e8a-8dbd-e56b695472c1","Type":"ContainerDied","Data":"3693fde0d88795c65cbeedf8dd9856f2e518a54d870ed0d0653bdc1b7689a58a"} Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.255553 5173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3693fde0d88795c65cbeedf8dd9856f2e518a54d870ed0d0653bdc1b7689a58a" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.256500 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b624h" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.260393 5173 generic.go:358] "Generic (PLEG): container finished" podID="8d6161e7-c32b-4a76-99f7-844c344211b8" containerID="6f4c57a5b7d7589dc7866f324d0ac28d97372985db9f6f5ae0e1b4d35547da40" exitCode=0 Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.260479 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj" event={"ID":"8d6161e7-c32b-4a76-99f7-844c344211b8","Type":"ContainerDied","Data":"6f4c57a5b7d7589dc7866f324d0ac28d97372985db9f6f5ae0e1b4d35547da40"} Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.280158 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bjpqk" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.282584 5173 scope.go:117] "RemoveContainer" containerID="fc8bdad2874eb21527848709f04bd44ddbf086708b664e798b79ffae73adc100" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.290977 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67-utilities\") pod \"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67\" (UID: \"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67\") " Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.291034 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67-catalog-content\") pod \"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67\" (UID: \"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67\") " Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.291065 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvdrf\" (UniqueName: \"kubernetes.io/projected/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67-kube-api-access-xvdrf\") pod \"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67\" (UID: \"4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67\") " Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.292786 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67-utilities" (OuterVolumeSpecName: "utilities") pod "4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67" (UID: "4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.304411 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67-kube-api-access-xvdrf" (OuterVolumeSpecName: "kube-api-access-xvdrf") pod "4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67" (UID: "4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67"). InnerVolumeSpecName "kube-api-access-xvdrf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.323136 5173 scope.go:117] "RemoveContainer" containerID="3ab7e572641d83e5de2db6e28885fab6614346f3ca25f592485870c61d76e1ea" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.333590 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67" (UID: "4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.342692 5173 scope.go:117] "RemoveContainer" containerID="4961d71ea52d324c0db7ab20aca831f61c1d8ebce046b42aec424914cad337aa" Dec 09 14:15:03 crc kubenswrapper[5173]: E1209 14:15:03.343196 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4961d71ea52d324c0db7ab20aca831f61c1d8ebce046b42aec424914cad337aa\": container with ID starting with 4961d71ea52d324c0db7ab20aca831f61c1d8ebce046b42aec424914cad337aa not found: ID does not exist" containerID="4961d71ea52d324c0db7ab20aca831f61c1d8ebce046b42aec424914cad337aa" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.343232 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4961d71ea52d324c0db7ab20aca831f61c1d8ebce046b42aec424914cad337aa"} err="failed to get container status \"4961d71ea52d324c0db7ab20aca831f61c1d8ebce046b42aec424914cad337aa\": rpc error: code = NotFound desc = could not find container \"4961d71ea52d324c0db7ab20aca831f61c1d8ebce046b42aec424914cad337aa\": container with ID starting with 4961d71ea52d324c0db7ab20aca831f61c1d8ebce046b42aec424914cad337aa not found: ID does not exist" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.343257 5173 scope.go:117] "RemoveContainer" containerID="fc8bdad2874eb21527848709f04bd44ddbf086708b664e798b79ffae73adc100" Dec 09 14:15:03 crc kubenswrapper[5173]: E1209 14:15:03.343550 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc8bdad2874eb21527848709f04bd44ddbf086708b664e798b79ffae73adc100\": container with ID starting with fc8bdad2874eb21527848709f04bd44ddbf086708b664e798b79ffae73adc100 not found: ID does not exist" containerID="fc8bdad2874eb21527848709f04bd44ddbf086708b664e798b79ffae73adc100" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.343575 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc8bdad2874eb21527848709f04bd44ddbf086708b664e798b79ffae73adc100"} err="failed to get container status \"fc8bdad2874eb21527848709f04bd44ddbf086708b664e798b79ffae73adc100\": rpc error: code = NotFound desc = could not find container \"fc8bdad2874eb21527848709f04bd44ddbf086708b664e798b79ffae73adc100\": container with ID starting with fc8bdad2874eb21527848709f04bd44ddbf086708b664e798b79ffae73adc100 not found: ID does not exist" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.343591 5173 scope.go:117] "RemoveContainer" containerID="3ab7e572641d83e5de2db6e28885fab6614346f3ca25f592485870c61d76e1ea" Dec 09 14:15:03 crc kubenswrapper[5173]: E1209 14:15:03.344071 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ab7e572641d83e5de2db6e28885fab6614346f3ca25f592485870c61d76e1ea\": container with ID starting with 3ab7e572641d83e5de2db6e28885fab6614346f3ca25f592485870c61d76e1ea not found: ID does not exist" containerID="3ab7e572641d83e5de2db6e28885fab6614346f3ca25f592485870c61d76e1ea" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.344095 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ab7e572641d83e5de2db6e28885fab6614346f3ca25f592485870c61d76e1ea"} err="failed to get container status \"3ab7e572641d83e5de2db6e28885fab6614346f3ca25f592485870c61d76e1ea\": rpc error: code = NotFound desc = could not 
find container \"3ab7e572641d83e5de2db6e28885fab6614346f3ca25f592485870c61d76e1ea\": container with ID starting with 3ab7e572641d83e5de2db6e28885fab6614346f3ca25f592485870c61d76e1ea not found: ID does not exist" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.392310 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gp4k\" (UniqueName: \"kubernetes.io/projected/d4b50aa3-6227-4e8a-8dbd-e56b695472c1-kube-api-access-2gp4k\") pod \"d4b50aa3-6227-4e8a-8dbd-e56b695472c1\" (UID: \"d4b50aa3-6227-4e8a-8dbd-e56b695472c1\") " Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.392412 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b50aa3-6227-4e8a-8dbd-e56b695472c1-utilities\") pod \"d4b50aa3-6227-4e8a-8dbd-e56b695472c1\" (UID: \"d4b50aa3-6227-4e8a-8dbd-e56b695472c1\") " Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.392479 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b50aa3-6227-4e8a-8dbd-e56b695472c1-catalog-content\") pod \"d4b50aa3-6227-4e8a-8dbd-e56b695472c1\" (UID: \"d4b50aa3-6227-4e8a-8dbd-e56b695472c1\") " Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.392927 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.392950 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.392963 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xvdrf\" (UniqueName: \"kubernetes.io/projected/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67-kube-api-access-xvdrf\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.397726 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b50aa3-6227-4e8a-8dbd-e56b695472c1-utilities" (OuterVolumeSpecName: "utilities") pod "d4b50aa3-6227-4e8a-8dbd-e56b695472c1" (UID: "d4b50aa3-6227-4e8a-8dbd-e56b695472c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.398873 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b50aa3-6227-4e8a-8dbd-e56b695472c1-kube-api-access-2gp4k" (OuterVolumeSpecName: "kube-api-access-2gp4k") pod "d4b50aa3-6227-4e8a-8dbd-e56b695472c1" (UID: "d4b50aa3-6227-4e8a-8dbd-e56b695472c1"). InnerVolumeSpecName "kube-api-access-2gp4k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.441796 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b50aa3-6227-4e8a-8dbd-e56b695472c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4b50aa3-6227-4e8a-8dbd-e56b695472c1" (UID: "d4b50aa3-6227-4e8a-8dbd-e56b695472c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.493909 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2gp4k\" (UniqueName: \"kubernetes.io/projected/d4b50aa3-6227-4e8a-8dbd-e56b695472c1-kube-api-access-2gp4k\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.493961 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b50aa3-6227-4e8a-8dbd-e56b695472c1-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.493978 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b50aa3-6227-4e8a-8dbd-e56b695472c1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.586546 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b624h"] Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.588815 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b624h"] Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.823563 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xmw7h" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.834586 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b7sjh" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.863165 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xmw7h" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.877681 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67" path="/var/lib/kubelet/pods/4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67/volumes" Dec 09 14:15:03 crc kubenswrapper[5173]: I1209 14:15:03.880022 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b7sjh" Dec 09 14:15:04 crc kubenswrapper[5173]: I1209 14:15:04.268171 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bjpqk" Dec 09 14:15:04 crc kubenswrapper[5173]: I1209 14:15:04.293869 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bjpqk"] Dec 09 14:15:04 crc kubenswrapper[5173]: I1209 14:15:04.296626 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bjpqk"] Dec 09 14:15:04 crc kubenswrapper[5173]: I1209 14:15:04.585286 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj" Dec 09 14:15:04 crc kubenswrapper[5173]: I1209 14:15:04.707528 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d6161e7-c32b-4a76-99f7-844c344211b8-secret-volume\") pod \"8d6161e7-c32b-4a76-99f7-844c344211b8\" (UID: \"8d6161e7-c32b-4a76-99f7-844c344211b8\") " Dec 09 14:15:04 crc kubenswrapper[5173]: I1209 14:15:04.707654 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d6161e7-c32b-4a76-99f7-844c344211b8-config-volume\") pod \"8d6161e7-c32b-4a76-99f7-844c344211b8\" (UID: \"8d6161e7-c32b-4a76-99f7-844c344211b8\") " Dec 09 14:15:04 crc kubenswrapper[5173]: I1209 14:15:04.707699 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w49j8\" (UniqueName: \"kubernetes.io/projected/8d6161e7-c32b-4a76-99f7-844c344211b8-kube-api-access-w49j8\") pod \"8d6161e7-c32b-4a76-99f7-844c344211b8\" (UID: \"8d6161e7-c32b-4a76-99f7-844c344211b8\") " Dec 09 14:15:04 crc kubenswrapper[5173]: I1209 14:15:04.708392 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d6161e7-c32b-4a76-99f7-844c344211b8-config-volume" (OuterVolumeSpecName: "config-volume") pod "8d6161e7-c32b-4a76-99f7-844c344211b8" (UID: "8d6161e7-c32b-4a76-99f7-844c344211b8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:04 crc kubenswrapper[5173]: I1209 14:15:04.708887 5173 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d6161e7-c32b-4a76-99f7-844c344211b8-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:04 crc kubenswrapper[5173]: I1209 14:15:04.713059 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d6161e7-c32b-4a76-99f7-844c344211b8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8d6161e7-c32b-4a76-99f7-844c344211b8" (UID: "8d6161e7-c32b-4a76-99f7-844c344211b8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:04 crc kubenswrapper[5173]: I1209 14:15:04.713535 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d6161e7-c32b-4a76-99f7-844c344211b8-kube-api-access-w49j8" (OuterVolumeSpecName: "kube-api-access-w49j8") pod "8d6161e7-c32b-4a76-99f7-844c344211b8" (UID: "8d6161e7-c32b-4a76-99f7-844c344211b8"). InnerVolumeSpecName "kube-api-access-w49j8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:04 crc kubenswrapper[5173]: I1209 14:15:04.810320 5173 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d6161e7-c32b-4a76-99f7-844c344211b8-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:04 crc kubenswrapper[5173]: I1209 14:15:04.810426 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w49j8\" (UniqueName: \"kubernetes.io/projected/8d6161e7-c32b-4a76-99f7-844c344211b8-kube-api-access-w49j8\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:05 crc kubenswrapper[5173]: I1209 14:15:05.274016 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj" event={"ID":"8d6161e7-c32b-4a76-99f7-844c344211b8","Type":"ContainerDied","Data":"24ba5ac36c0411bb50b2d0ac0d1cd770eb780fbea74fca3a8c1b167f494ec662"} Dec 09 14:15:05 crc kubenswrapper[5173]: I1209 14:15:05.274374 5173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24ba5ac36c0411bb50b2d0ac0d1cd770eb780fbea74fca3a8c1b167f494ec662" Dec 09 14:15:05 crc kubenswrapper[5173]: I1209 14:15:05.274326 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421495-4bxwj" Dec 09 14:15:05 crc kubenswrapper[5173]: I1209 14:15:05.879136 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b50aa3-6227-4e8a-8dbd-e56b695472c1" path="/var/lib/kubelet/pods/d4b50aa3-6227-4e8a-8dbd-e56b695472c1/volumes" Dec 09 14:15:06 crc kubenswrapper[5173]: I1209 14:15:06.650860 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b7sjh"] Dec 09 14:15:06 crc kubenswrapper[5173]: I1209 14:15:06.651199 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b7sjh" podUID="558ba319-3c10-46e3-a9e8-64e5b28db3ea" containerName="registry-server" containerID="cri-o://2cace769e0c3a8ca7bf1f601f0c629653d2e21672cb1dd5cadb9daa3f554feeb" gracePeriod=2 Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.081668 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b7sjh" Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.142074 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558ba319-3c10-46e3-a9e8-64e5b28db3ea-utilities\") pod \"558ba319-3c10-46e3-a9e8-64e5b28db3ea\" (UID: \"558ba319-3c10-46e3-a9e8-64e5b28db3ea\") " Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.142163 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2trx\" (UniqueName: \"kubernetes.io/projected/558ba319-3c10-46e3-a9e8-64e5b28db3ea-kube-api-access-k2trx\") pod \"558ba319-3c10-46e3-a9e8-64e5b28db3ea\" (UID: \"558ba319-3c10-46e3-a9e8-64e5b28db3ea\") " Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.142278 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558ba319-3c10-46e3-a9e8-64e5b28db3ea-catalog-content\") pod \"558ba319-3c10-46e3-a9e8-64e5b28db3ea\" (UID: \"558ba319-3c10-46e3-a9e8-64e5b28db3ea\") " Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.144409 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/558ba319-3c10-46e3-a9e8-64e5b28db3ea-utilities" (OuterVolumeSpecName: "utilities") pod "558ba319-3c10-46e3-a9e8-64e5b28db3ea" (UID: "558ba319-3c10-46e3-a9e8-64e5b28db3ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.151567 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/558ba319-3c10-46e3-a9e8-64e5b28db3ea-kube-api-access-k2trx" (OuterVolumeSpecName: "kube-api-access-k2trx") pod "558ba319-3c10-46e3-a9e8-64e5b28db3ea" (UID: "558ba319-3c10-46e3-a9e8-64e5b28db3ea"). InnerVolumeSpecName "kube-api-access-k2trx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.231002 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/558ba319-3c10-46e3-a9e8-64e5b28db3ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "558ba319-3c10-46e3-a9e8-64e5b28db3ea" (UID: "558ba319-3c10-46e3-a9e8-64e5b28db3ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.244030 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558ba319-3c10-46e3-a9e8-64e5b28db3ea-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.244083 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558ba319-3c10-46e3-a9e8-64e5b28db3ea-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.244095 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k2trx\" (UniqueName: \"kubernetes.io/projected/558ba319-3c10-46e3-a9e8-64e5b28db3ea-kube-api-access-k2trx\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.288264 5173 generic.go:358] "Generic (PLEG): container finished" podID="558ba319-3c10-46e3-a9e8-64e5b28db3ea" containerID="2cace769e0c3a8ca7bf1f601f0c629653d2e21672cb1dd5cadb9daa3f554feeb" exitCode=0 Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.288365 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7sjh" event={"ID":"558ba319-3c10-46e3-a9e8-64e5b28db3ea","Type":"ContainerDied","Data":"2cace769e0c3a8ca7bf1f601f0c629653d2e21672cb1dd5cadb9daa3f554feeb"} Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.288433 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7sjh" Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.288438 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7sjh" event={"ID":"558ba319-3c10-46e3-a9e8-64e5b28db3ea","Type":"ContainerDied","Data":"4642d50fb978f1a53c8b7c0b6e0d08cfc263b6de74e4eac21d37ad9b962f0e5f"} Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.288511 5173 scope.go:117] "RemoveContainer" containerID="2cace769e0c3a8ca7bf1f601f0c629653d2e21672cb1dd5cadb9daa3f554feeb" Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.309704 5173 scope.go:117] "RemoveContainer" containerID="239d3de44897a990717f1666ca9fe3da6657b9179fbbecf06cc82624c72ded4b" Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.324852 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b7sjh"] Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.325911 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b7sjh"] Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.339461 5173 scope.go:117] "RemoveContainer" containerID="c3d5fb8e679e3e2df9c3ef96ba6f0cd27a26689a9d0ae8f1c837fc3e281a4e26" Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.356453 5173 scope.go:117] "RemoveContainer" containerID="2cace769e0c3a8ca7bf1f601f0c629653d2e21672cb1dd5cadb9daa3f554feeb" Dec 09 14:15:07 crc kubenswrapper[5173]: E1209 14:15:07.356812 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cace769e0c3a8ca7bf1f601f0c629653d2e21672cb1dd5cadb9daa3f554feeb\": container with ID starting with 2cace769e0c3a8ca7bf1f601f0c629653d2e21672cb1dd5cadb9daa3f554feeb not found: ID does not exist" containerID="2cace769e0c3a8ca7bf1f601f0c629653d2e21672cb1dd5cadb9daa3f554feeb" Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.356840 5173 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cace769e0c3a8ca7bf1f601f0c629653d2e21672cb1dd5cadb9daa3f554feeb"} err="failed to get container status \"2cace769e0c3a8ca7bf1f601f0c629653d2e21672cb1dd5cadb9daa3f554feeb\": rpc error: code = NotFound desc = could not find container \"2cace769e0c3a8ca7bf1f601f0c629653d2e21672cb1dd5cadb9daa3f554feeb\": container with ID starting with 2cace769e0c3a8ca7bf1f601f0c629653d2e21672cb1dd5cadb9daa3f554feeb not found: ID does not exist" Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.356860 5173 scope.go:117] "RemoveContainer" containerID="239d3de44897a990717f1666ca9fe3da6657b9179fbbecf06cc82624c72ded4b" Dec 09 14:15:07 crc kubenswrapper[5173]: E1209 14:15:07.357034 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"239d3de44897a990717f1666ca9fe3da6657b9179fbbecf06cc82624c72ded4b\": container with ID starting with 239d3de44897a990717f1666ca9fe3da6657b9179fbbecf06cc82624c72ded4b not found: ID does not exist" containerID="239d3de44897a990717f1666ca9fe3da6657b9179fbbecf06cc82624c72ded4b" Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.357055 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"239d3de44897a990717f1666ca9fe3da6657b9179fbbecf06cc82624c72ded4b"} err="failed to get container status \"239d3de44897a990717f1666ca9fe3da6657b9179fbbecf06cc82624c72ded4b\": rpc error: code = NotFound desc = could not find container \"239d3de44897a990717f1666ca9fe3da6657b9179fbbecf06cc82624c72ded4b\": container with ID starting with 239d3de44897a990717f1666ca9fe3da6657b9179fbbecf06cc82624c72ded4b not found: ID does not exist" Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.357068 5173 scope.go:117] "RemoveContainer" containerID="c3d5fb8e679e3e2df9c3ef96ba6f0cd27a26689a9d0ae8f1c837fc3e281a4e26" Dec 09 14:15:07 crc kubenswrapper[5173]: E1209 14:15:07.357226 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3d5fb8e679e3e2df9c3ef96ba6f0cd27a26689a9d0ae8f1c837fc3e281a4e26\": container with ID starting with c3d5fb8e679e3e2df9c3ef96ba6f0cd27a26689a9d0ae8f1c837fc3e281a4e26 not found: ID does not exist" containerID="c3d5fb8e679e3e2df9c3ef96ba6f0cd27a26689a9d0ae8f1c837fc3e281a4e26" Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.357252 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3d5fb8e679e3e2df9c3ef96ba6f0cd27a26689a9d0ae8f1c837fc3e281a4e26"} err="failed to get container status \"c3d5fb8e679e3e2df9c3ef96ba6f0cd27a26689a9d0ae8f1c837fc3e281a4e26\": rpc error: code = NotFound desc = could not find container \"c3d5fb8e679e3e2df9c3ef96ba6f0cd27a26689a9d0ae8f1c837fc3e281a4e26\": container with ID starting with c3d5fb8e679e3e2df9c3ef96ba6f0cd27a26689a9d0ae8f1c837fc3e281a4e26 not found: ID does not exist" Dec 09 14:15:07 crc kubenswrapper[5173]: I1209 14:15:07.880071 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="558ba319-3c10-46e3-a9e8-64e5b28db3ea" path="/var/lib/kubelet/pods/558ba319-3c10-46e3-a9e8-64e5b28db3ea/volumes" Dec 09 14:15:17 crc kubenswrapper[5173]: I1209 14:15:17.695051 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b47dcf89f-kxn55"] Dec 09 14:15:17 crc kubenswrapper[5173]: I1209 14:15:17.695917 5173 kuberuntime_container.go:858] "Killing 
container with a grace period" pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" podUID="2cff0b6a-d823-4356-8362-b7e829522f42" containerName="controller-manager" containerID="cri-o://4d8349bd239253b6f686e05f29bf3750b1694bba5ac8173fe2ad4e5be9ad53f4" gracePeriod=30 Dec 09 14:15:17 crc kubenswrapper[5173]: I1209 14:15:17.698302 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns"] Dec 09 14:15:17 crc kubenswrapper[5173]: I1209 14:15:17.698506 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" podUID="ff1be0cb-1c62-419f-a8ad-e98bf0cc194b" containerName="route-controller-manager" containerID="cri-o://fe6c528281f676e4bccd4918395818e89c6ce4ca9525d875ad4d432b7d387b97" gracePeriod=30 Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.199202 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.224914 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g"] Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225564 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="558ba319-3c10-46e3-a9e8-64e5b28db3ea" containerName="registry-server" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225584 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="558ba319-3c10-46e3-a9e8-64e5b28db3ea" containerName="registry-server" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225602 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ff1be0cb-1c62-419f-a8ad-e98bf0cc194b" containerName="route-controller-manager" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225610 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1be0cb-1c62-419f-a8ad-e98bf0cc194b" containerName="route-controller-manager" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225620 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67" containerName="extract-utilities" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225626 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67" containerName="extract-utilities" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225638 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67" containerName="registry-server" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225645 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67" containerName="registry-server" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225651 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d4b50aa3-6227-4e8a-8dbd-e56b695472c1" containerName="registry-server" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225658 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b50aa3-6227-4e8a-8dbd-e56b695472c1" containerName="registry-server" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225675 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67" containerName="extract-content" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225682 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67" containerName="extract-content" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225692 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d4b50aa3-6227-4e8a-8dbd-e56b695472c1" containerName="extract-content" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225700 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b50aa3-6227-4e8a-8dbd-e56b695472c1" containerName="extract-content" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225710 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d6161e7-c32b-4a76-99f7-844c344211b8" containerName="collect-profiles" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225719 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6161e7-c32b-4a76-99f7-844c344211b8" containerName="collect-profiles" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225734 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="558ba319-3c10-46e3-a9e8-64e5b28db3ea" containerName="extract-content" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225741 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="558ba319-3c10-46e3-a9e8-64e5b28db3ea" containerName="extract-content" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225750 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="558ba319-3c10-46e3-a9e8-64e5b28db3ea" containerName="extract-utilities" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225756 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="558ba319-3c10-46e3-a9e8-64e5b28db3ea" containerName="extract-utilities" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225768 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d4b50aa3-6227-4e8a-8dbd-e56b695472c1" containerName="extract-utilities" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225774 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b50aa3-6227-4e8a-8dbd-e56b695472c1" containerName="extract-utilities" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225864 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="8d6161e7-c32b-4a76-99f7-844c344211b8" containerName="collect-profiles" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225879 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="558ba319-3c10-46e3-a9e8-64e5b28db3ea" containerName="registry-server" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225888 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="d4b50aa3-6227-4e8a-8dbd-e56b695472c1" containerName="registry-server" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225913 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="ff1be0cb-1c62-419f-a8ad-e98bf0cc194b" containerName="route-controller-manager" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.225924 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="4723c0a4-6d37-4bcd-9189-4a9d1f6cfb67" containerName="registry-server" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.229089 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.235097 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g"] Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.277889 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-serving-cert\") pod \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.277979 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkkjm\" (UniqueName: \"kubernetes.io/projected/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-kube-api-access-jkkjm\") pod \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.278081 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-config\") pod \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.278108 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-tmp\") pod \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.278203 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-client-ca\") pod \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\" (UID: \"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b\") " Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.278669 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-tmp" (OuterVolumeSpecName: "tmp") pod "ff1be0cb-1c62-419f-a8ad-e98bf0cc194b" (UID: "ff1be0cb-1c62-419f-a8ad-e98bf0cc194b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.278892 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-client-ca" (OuterVolumeSpecName: "client-ca") pod "ff1be0cb-1c62-419f-a8ad-e98bf0cc194b" (UID: "ff1be0cb-1c62-419f-a8ad-e98bf0cc194b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.278938 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-config" (OuterVolumeSpecName: "config") pod "ff1be0cb-1c62-419f-a8ad-e98bf0cc194b" (UID: "ff1be0cb-1c62-419f-a8ad-e98bf0cc194b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.292776 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ff1be0cb-1c62-419f-a8ad-e98bf0cc194b" (UID: "ff1be0cb-1c62-419f-a8ad-e98bf0cc194b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.292814 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-kube-api-access-jkkjm" (OuterVolumeSpecName: "kube-api-access-jkkjm") pod "ff1be0cb-1c62-419f-a8ad-e98bf0cc194b" (UID: "ff1be0cb-1c62-419f-a8ad-e98bf0cc194b"). InnerVolumeSpecName "kube-api-access-jkkjm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.343769 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.361618 5173 generic.go:358] "Generic (PLEG): container finished" podID="ff1be0cb-1c62-419f-a8ad-e98bf0cc194b" containerID="fe6c528281f676e4bccd4918395818e89c6ce4ca9525d875ad4d432b7d387b97" exitCode=0 Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.361747 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.361788 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" event={"ID":"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b","Type":"ContainerDied","Data":"fe6c528281f676e4bccd4918395818e89c6ce4ca9525d875ad4d432b7d387b97"} Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.361819 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" event={"ID":"ff1be0cb-1c62-419f-a8ad-e98bf0cc194b","Type":"ContainerDied","Data":"b30ab968137f41540015b4fcf6296390e5ba1f865f18b04aba6ca0bcb9581ad2"} Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.361838 5173 scope.go:117] "RemoveContainer" containerID="fe6c528281f676e4bccd4918395818e89c6ce4ca9525d875ad4d432b7d387b97" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.369074 5173 generic.go:358] "Generic (PLEG): container finished" podID="2cff0b6a-d823-4356-8362-b7e829522f42" containerID="4d8349bd239253b6f686e05f29bf3750b1694bba5ac8173fe2ad4e5be9ad53f4" exitCode=0 Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.369204 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" event={"ID":"2cff0b6a-d823-4356-8362-b7e829522f42","Type":"ContainerDied","Data":"4d8349bd239253b6f686e05f29bf3750b1694bba5ac8173fe2ad4e5be9ad53f4"} Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.369231 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" event={"ID":"2cff0b6a-d823-4356-8362-b7e829522f42","Type":"ContainerDied","Data":"c14c0918d09ee20f67d360207032bf7c2019caa0323723b932f99abd575243ac"} Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.369281 5173 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.373112 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr"] Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.373859 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2cff0b6a-d823-4356-8362-b7e829522f42" containerName="controller-manager" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.374681 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cff0b6a-d823-4356-8362-b7e829522f42" containerName="controller-manager" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.374974 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="2cff0b6a-d823-4356-8362-b7e829522f42" containerName="controller-manager" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.379433 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-config\") pod \"route-controller-manager-656684887c-zgq8g\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.379499 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-serving-cert\") pod \"route-controller-manager-656684887c-zgq8g\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.379529 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-tmp\") pod \"route-controller-manager-656684887c-zgq8g\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.379562 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-client-ca\") pod \"route-controller-manager-656684887c-zgq8g\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.379647 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmt6n\" (UniqueName: \"kubernetes.io/projected/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-kube-api-access-cmt6n\") pod \"route-controller-manager-656684887c-zgq8g\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.379878 5173 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.379909 5173 reconciler_common.go:299] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.379922 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jkkjm\" (UniqueName: \"kubernetes.io/projected/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-kube-api-access-jkkjm\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.379933 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.379942 5173 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.380561 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.408433 5173 scope.go:117] "RemoveContainer" containerID="fe6c528281f676e4bccd4918395818e89c6ce4ca9525d875ad4d432b7d387b97" Dec 09 14:15:18 crc kubenswrapper[5173]: E1209 14:15:18.408871 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe6c528281f676e4bccd4918395818e89c6ce4ca9525d875ad4d432b7d387b97\": container with ID starting with fe6c528281f676e4bccd4918395818e89c6ce4ca9525d875ad4d432b7d387b97 not found: ID does not exist" containerID="fe6c528281f676e4bccd4918395818e89c6ce4ca9525d875ad4d432b7d387b97" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.408910 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe6c528281f676e4bccd4918395818e89c6ce4ca9525d875ad4d432b7d387b97"} err="failed to get container status \"fe6c528281f676e4bccd4918395818e89c6ce4ca9525d875ad4d432b7d387b97\": rpc error: code = NotFound desc = could not find container \"fe6c528281f676e4bccd4918395818e89c6ce4ca9525d875ad4d432b7d387b97\": container with ID starting with fe6c528281f676e4bccd4918395818e89c6ce4ca9525d875ad4d432b7d387b97 not found: ID does not exist" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.408937 5173 scope.go:117] "RemoveContainer" containerID="4d8349bd239253b6f686e05f29bf3750b1694bba5ac8173fe2ad4e5be9ad53f4" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.410424 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr"] Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.423254 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns"] Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.427672 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns"] Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.429793 5173 scope.go:117] "RemoveContainer" containerID="4d8349bd239253b6f686e05f29bf3750b1694bba5ac8173fe2ad4e5be9ad53f4" Dec 09 14:15:18 crc kubenswrapper[5173]: E1209 14:15:18.430318 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4d8349bd239253b6f686e05f29bf3750b1694bba5ac8173fe2ad4e5be9ad53f4\": container with ID starting with 4d8349bd239253b6f686e05f29bf3750b1694bba5ac8173fe2ad4e5be9ad53f4 not found: ID does not exist" containerID="4d8349bd239253b6f686e05f29bf3750b1694bba5ac8173fe2ad4e5be9ad53f4" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.430378 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d8349bd239253b6f686e05f29bf3750b1694bba5ac8173fe2ad4e5be9ad53f4"} err="failed to get container status \"4d8349bd239253b6f686e05f29bf3750b1694bba5ac8173fe2ad4e5be9ad53f4\": rpc error: code = NotFound desc = could not find container \"4d8349bd239253b6f686e05f29bf3750b1694bba5ac8173fe2ad4e5be9ad53f4\": container with ID starting with 4d8349bd239253b6f686e05f29bf3750b1694bba5ac8173fe2ad4e5be9ad53f4 not found: ID does not exist" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.481009 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2cff0b6a-d823-4356-8362-b7e829522f42-tmp\") pod \"2cff0b6a-d823-4356-8362-b7e829522f42\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.481179 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cff0b6a-d823-4356-8362-b7e829522f42-client-ca\") pod \"2cff0b6a-d823-4356-8362-b7e829522f42\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.481229 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvz65\" (UniqueName: \"kubernetes.io/projected/2cff0b6a-d823-4356-8362-b7e829522f42-kube-api-access-qvz65\") pod \"2cff0b6a-d823-4356-8362-b7e829522f42\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.481289 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cff0b6a-d823-4356-8362-b7e829522f42-config\") pod \"2cff0b6a-d823-4356-8362-b7e829522f42\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.481317 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cff0b6a-d823-4356-8362-b7e829522f42-proxy-ca-bundles\") pod \"2cff0b6a-d823-4356-8362-b7e829522f42\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.481501 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cff0b6a-d823-4356-8362-b7e829522f42-tmp" (OuterVolumeSpecName: "tmp") pod "2cff0b6a-d823-4356-8362-b7e829522f42" (UID: "2cff0b6a-d823-4356-8362-b7e829522f42"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.481591 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cff0b6a-d823-4356-8362-b7e829522f42-serving-cert\") pod \"2cff0b6a-d823-4356-8362-b7e829522f42\" (UID: \"2cff0b6a-d823-4356-8362-b7e829522f42\") " Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.481869 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-serving-cert\") pod \"controller-manager-7cf8b4c577-5fsvr\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.481979 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cff0b6a-d823-4356-8362-b7e829522f42-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2cff0b6a-d823-4356-8362-b7e829522f42" (UID: "2cff0b6a-d823-4356-8362-b7e829522f42"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.482018 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-tmp\") pod \"controller-manager-7cf8b4c577-5fsvr\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.482087 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wt2p\" (UniqueName: \"kubernetes.io/projected/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-kube-api-access-6wt2p\") pod \"controller-manager-7cf8b4c577-5fsvr\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.482223 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-config\") pod \"route-controller-manager-656684887c-zgq8g\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.482251 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cff0b6a-d823-4356-8362-b7e829522f42-client-ca" (OuterVolumeSpecName: "client-ca") pod "2cff0b6a-d823-4356-8362-b7e829522f42" (UID: "2cff0b6a-d823-4356-8362-b7e829522f42"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.482378 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cff0b6a-d823-4356-8362-b7e829522f42-config" (OuterVolumeSpecName: "config") pod "2cff0b6a-d823-4356-8362-b7e829522f42" (UID: "2cff0b6a-d823-4356-8362-b7e829522f42"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.482292 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-serving-cert\") pod \"route-controller-manager-656684887c-zgq8g\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.482938 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-tmp\") pod \"route-controller-manager-656684887c-zgq8g\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.482973 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-proxy-ca-bundles\") pod \"controller-manager-7cf8b4c577-5fsvr\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.483184 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-client-ca\") pod \"controller-manager-7cf8b4c577-5fsvr\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.483270 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-client-ca\") pod \"route-controller-manager-656684887c-zgq8g\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.483379 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cmt6n\" (UniqueName: \"kubernetes.io/projected/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-kube-api-access-cmt6n\") pod \"route-controller-manager-656684887c-zgq8g\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.483424 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-tmp\") pod \"route-controller-manager-656684887c-zgq8g\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.483449 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-config\") pod \"route-controller-manager-656684887c-zgq8g\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 
14:15:18.483436 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-config\") pod \"controller-manager-7cf8b4c577-5fsvr\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.483552 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cff0b6a-d823-4356-8362-b7e829522f42-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.483569 5173 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2cff0b6a-d823-4356-8362-b7e829522f42-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.483584 5173 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2cff0b6a-d823-4356-8362-b7e829522f42-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.483597 5173 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cff0b6a-d823-4356-8362-b7e829522f42-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.484156 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-client-ca\") pod \"route-controller-manager-656684887c-zgq8g\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.486697 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cff0b6a-d823-4356-8362-b7e829522f42-kube-api-access-qvz65" (OuterVolumeSpecName: "kube-api-access-qvz65") pod "2cff0b6a-d823-4356-8362-b7e829522f42" (UID: "2cff0b6a-d823-4356-8362-b7e829522f42"). InnerVolumeSpecName "kube-api-access-qvz65". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.486938 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cff0b6a-d823-4356-8362-b7e829522f42-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2cff0b6a-d823-4356-8362-b7e829522f42" (UID: "2cff0b6a-d823-4356-8362-b7e829522f42"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.487010 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-serving-cert\") pod \"route-controller-manager-656684887c-zgq8g\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.502080 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmt6n\" (UniqueName: \"kubernetes.io/projected/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-kube-api-access-cmt6n\") pod \"route-controller-manager-656684887c-zgq8g\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.570177 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.587134 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-tmp\") pod \"controller-manager-7cf8b4c577-5fsvr\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.587195 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6wt2p\" (UniqueName: \"kubernetes.io/projected/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-kube-api-access-6wt2p\") pod \"controller-manager-7cf8b4c577-5fsvr\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.587600 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-tmp\") pod \"controller-manager-7cf8b4c577-5fsvr\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.587798 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-proxy-ca-bundles\") pod \"controller-manager-7cf8b4c577-5fsvr\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.588647 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-client-ca\") pod \"controller-manager-7cf8b4c577-5fsvr\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.589205 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-proxy-ca-bundles\") pod \"controller-manager-7cf8b4c577-5fsvr\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " 
pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.589260 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-client-ca\") pod \"controller-manager-7cf8b4c577-5fsvr\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.589323 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-config\") pod \"controller-manager-7cf8b4c577-5fsvr\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.590869 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-config\") pod \"controller-manager-7cf8b4c577-5fsvr\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.591415 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-serving-cert\") pod \"controller-manager-7cf8b4c577-5fsvr\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.591507 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cff0b6a-d823-4356-8362-b7e829522f42-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.591519 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qvz65\" (UniqueName: \"kubernetes.io/projected/2cff0b6a-d823-4356-8362-b7e829522f42-kube-api-access-qvz65\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.594723 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-serving-cert\") pod \"controller-manager-7cf8b4c577-5fsvr\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.606763 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wt2p\" (UniqueName: \"kubernetes.io/projected/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-kube-api-access-6wt2p\") pod \"controller-manager-7cf8b4c577-5fsvr\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.702724 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b47dcf89f-kxn55"] Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.705460 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b47dcf89f-kxn55"] Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.713045 5173 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.769899 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g"] Dec 09 14:15:18 crc kubenswrapper[5173]: W1209 14:15:18.783641 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb60b11a3_6068_4c59_bc81_8bc06ba89d0e.slice/crio-7c4f972db7b02eea985a2b72867f2dd5d4f587dcc8b048d6edcccea646f34b1d WatchSource:0}: Error finding container 7c4f972db7b02eea985a2b72867f2dd5d4f587dcc8b048d6edcccea646f34b1d: Status 404 returned error can't find the container with id 7c4f972db7b02eea985a2b72867f2dd5d4f587dcc8b048d6edcccea646f34b1d Dec 09 14:15:18 crc kubenswrapper[5173]: I1209 14:15:18.934730 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr"] Dec 09 14:15:18 crc kubenswrapper[5173]: W1209 14:15:18.942106 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d43d7f6_269b_42cb_a5c5_ee55ebc08c58.slice/crio-b1acaab327f59f114e1c9a7efe8dd33c7d80d7320e7e8515656cab6c93d121ad WatchSource:0}: Error finding container b1acaab327f59f114e1c9a7efe8dd33c7d80d7320e7e8515656cab6c93d121ad: Status 404 returned error can't find the container with id b1acaab327f59f114e1c9a7efe8dd33c7d80d7320e7e8515656cab6c93d121ad Dec 09 14:15:19 crc kubenswrapper[5173]: I1209 14:15:19.075687 5173 patch_prober.go:28] interesting pod/route-controller-manager-6754ff4c54-7hkns container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 14:15:19 crc kubenswrapper[5173]: I1209 14:15:19.075771 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6754ff4c54-7hkns" podUID="ff1be0cb-1c62-419f-a8ad-e98bf0cc194b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 14:15:19 crc kubenswrapper[5173]: I1209 14:15:19.079853 5173 patch_prober.go:28] interesting pod/controller-manager-5b47dcf89f-kxn55 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": context deadline exceeded" start-of-body= Dec 09 14:15:19 crc kubenswrapper[5173]: I1209 14:15:19.079915 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5b47dcf89f-kxn55" podUID="2cff0b6a-d823-4356-8362-b7e829522f42" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": context deadline exceeded" Dec 09 14:15:19 crc kubenswrapper[5173]: I1209 14:15:19.401386 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" event={"ID":"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58","Type":"ContainerStarted","Data":"29a0010d6682e87152cc9a8db73dd177bcc655b9d3158d91b336e57ef1c8d60b"} Dec 09 14:15:19 crc 
kubenswrapper[5173]: I1209 14:15:19.401449 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" event={"ID":"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58","Type":"ContainerStarted","Data":"b1acaab327f59f114e1c9a7efe8dd33c7d80d7320e7e8515656cab6c93d121ad"} Dec 09 14:15:19 crc kubenswrapper[5173]: I1209 14:15:19.402377 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:19 crc kubenswrapper[5173]: I1209 14:15:19.404061 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" event={"ID":"b60b11a3-6068-4c59-bc81-8bc06ba89d0e","Type":"ContainerStarted","Data":"cb0dd0290b915936b531f9e3ef84287bb9c3c1670f4cf9760b955ca62db27fc5"} Dec 09 14:15:19 crc kubenswrapper[5173]: I1209 14:15:19.404117 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" event={"ID":"b60b11a3-6068-4c59-bc81-8bc06ba89d0e","Type":"ContainerStarted","Data":"7c4f972db7b02eea985a2b72867f2dd5d4f587dcc8b048d6edcccea646f34b1d"} Dec 09 14:15:19 crc kubenswrapper[5173]: I1209 14:15:19.404694 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:19 crc kubenswrapper[5173]: I1209 14:15:19.430699 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" podStartSLOduration=2.430677718 podStartE2EDuration="2.430677718s" podCreationTimestamp="2025-12-09 14:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:15:19.426204949 +0000 UTC m=+202.351487206" watchObservedRunningTime="2025-12-09 14:15:19.430677718 +0000 UTC m=+202.355959965" Dec 09 14:15:19 crc kubenswrapper[5173]: I1209 14:15:19.445842 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" podStartSLOduration=2.445812228 podStartE2EDuration="2.445812228s" podCreationTimestamp="2025-12-09 14:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:15:19.443250379 +0000 UTC m=+202.368532636" watchObservedRunningTime="2025-12-09 14:15:19.445812228 +0000 UTC m=+202.371094475" Dec 09 14:15:19 crc kubenswrapper[5173]: I1209 14:15:19.876703 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cff0b6a-d823-4356-8362-b7e829522f42" path="/var/lib/kubelet/pods/2cff0b6a-d823-4356-8362-b7e829522f42/volumes" Dec 09 14:15:19 crc kubenswrapper[5173]: I1209 14:15:19.877277 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff1be0cb-1c62-419f-a8ad-e98bf0cc194b" path="/var/lib/kubelet/pods/ff1be0cb-1c62-419f-a8ad-e98bf0cc194b/volumes" Dec 09 14:15:20 crc kubenswrapper[5173]: I1209 14:15:20.070309 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:15:20 crc kubenswrapper[5173]: I1209 14:15:20.095390 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:15:20 crc kubenswrapper[5173]: I1209 14:15:20.529189 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-znppb"] Dec 09 14:15:22 crc kubenswrapper[5173]: I1209 14:15:22.036556 5173 ???:1] "http: TLS handshake error from 192.168.126.11:44720: no serving certificate available for the kubelet" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.378177 5173 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.393711 5173 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.393770 5173 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.393962 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394380 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394410 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394420 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394425 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394435 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394441 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394450 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394456 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394467 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394472 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394479 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 09 14:15:33 crc 
kubenswrapper[5173]: I1209 14:15:33.394484 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394492 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394497 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394506 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394511 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394520 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394527 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394568 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394574 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394712 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f" gracePeriod=15 Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394773 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7" gracePeriod=15 Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394832 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd" gracePeriod=15 Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394793 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://795472ed85f8907273dd1d43c9bbbee761c69d5332067589f55aea901cd28a66" gracePeriod=15 Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.394898 5173 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d" gracePeriod=15 Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.395321 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.395344 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.395389 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.395405 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.395417 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.395427 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.395439 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.395456 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.395470 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.400682 5173 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.416961 5173 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.435247 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.495659 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.495717 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") 
" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.495748 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.495785 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.495830 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.495861 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.495926 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.495952 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.495981 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.496013 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.597192 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.597241 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.597271 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.597301 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.597325 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.597298 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.597378 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.597473 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.597920 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.597940 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.597972 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.598056 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.598097 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.598136 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.598209 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.598287 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.598330 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.598471 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.598479 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 
14:15:33.598529 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: I1209 14:15:33.729206 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:15:33 crc kubenswrapper[5173]: W1209 14:15:33.755018 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-239caa21bd98380daf601278871c595f99545a54e00ab811c4a51e50a6bbbb27 WatchSource:0}: Error finding container 239caa21bd98380daf601278871c595f99545a54e00ab811c4a51e50a6bbbb27: Status 404 returned error can't find the container with id 239caa21bd98380daf601278871c595f99545a54e00ab811c4a51e50a6bbbb27 Dec 09 14:15:33 crc kubenswrapper[5173]: E1209 14:15:33.758744 5173 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.144:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f91a8f242f095 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:15:33.757530261 +0000 UTC m=+216.682812538,LastTimestamp:2025-12-09 14:15:33.757530261 +0000 UTC m=+216.682812538,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:15:34 crc kubenswrapper[5173]: I1209 14:15:34.498884 5173 generic.go:358] "Generic (PLEG): container finished" podID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" containerID="9a03f5e6354372f720d66dddde10fb0f2293c71a6c18790d0282d3f029c86077" exitCode=0 Dec 09 14:15:34 crc kubenswrapper[5173]: I1209 14:15:34.498987 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a","Type":"ContainerDied","Data":"9a03f5e6354372f720d66dddde10fb0f2293c71a6c18790d0282d3f029c86077"} Dec 09 14:15:34 crc kubenswrapper[5173]: I1209 14:15:34.500328 5173 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:34 crc kubenswrapper[5173]: I1209 14:15:34.500701 5173 status_manager.go:895] "Failed to get status for pod" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" pod="openshift-kube-apiserver/installer-12-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:34 crc kubenswrapper[5173]: I1209 14:15:34.500887 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"1504847409cadb7ba6aff1f485523aa7088fda4b0fb30ee6746a959898516f24"} Dec 09 14:15:34 crc kubenswrapper[5173]: I1209 14:15:34.500978 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"239caa21bd98380daf601278871c595f99545a54e00ab811c4a51e50a6bbbb27"} Dec 09 14:15:34 crc kubenswrapper[5173]: I1209 14:15:34.501461 5173 status_manager.go:895] "Failed to get status for pod" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:34 crc kubenswrapper[5173]: I1209 14:15:34.503312 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 09 14:15:34 crc kubenswrapper[5173]: I1209 14:15:34.504631 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 09 14:15:34 crc kubenswrapper[5173]: I1209 14:15:34.505319 5173 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="795472ed85f8907273dd1d43c9bbbee761c69d5332067589f55aea901cd28a66" exitCode=0 Dec 09 14:15:34 crc kubenswrapper[5173]: I1209 14:15:34.505361 5173 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7" exitCode=0 Dec 09 14:15:34 crc kubenswrapper[5173]: I1209 14:15:34.505373 5173 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd" exitCode=0 Dec 09 14:15:34 crc kubenswrapper[5173]: I1209 14:15:34.505381 5173 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d" exitCode=2 Dec 09 14:15:34 crc kubenswrapper[5173]: I1209 14:15:34.505402 5173 scope.go:117] "RemoveContainer" containerID="c33dc1dfd257c4de340c743482e065958fc65e7753e6e93d7ffb5edbabb3751d" Dec 09 14:15:34 crc kubenswrapper[5173]: I1209 14:15:34.506563 5173 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.522616 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 09 14:15:35 crc 
kubenswrapper[5173]: I1209 14:15:35.794909 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.796152 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.797257 5173 status_manager.go:895] "Failed to get status for pod" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.797782 5173 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.798387 5173 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.867129 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.867849 5173 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.868194 5173 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.868672 5173 status_manager.go:895] "Failed to get status for pod" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.926858 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/089f8d89-d0b0-4ebd-a28c-d5a0da357b1a-var-lock\") pod \"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a\" (UID: \"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a\") " Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.926993 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.927011 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/089f8d89-d0b0-4ebd-a28c-d5a0da357b1a-var-lock" (OuterVolumeSpecName: "var-lock") pod "089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" (UID: "089f8d89-d0b0-4ebd-a28c-d5a0da357b1a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.927024 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/089f8d89-d0b0-4ebd-a28c-d5a0da357b1a-kube-api-access\") pod \"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a\" (UID: \"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a\") " Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.927088 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.927141 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.927207 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.927293 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.927319 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/089f8d89-d0b0-4ebd-a28c-d5a0da357b1a-kubelet-dir\") pod \"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a\" (UID: \"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a\") " Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.927372 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.927459 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.927487 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/089f8d89-d0b0-4ebd-a28c-d5a0da357b1a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" (UID: "089f8d89-d0b0-4ebd-a28c-d5a0da357b1a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.927471 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.927648 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.927998 5173 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.928031 5173 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.928044 5173 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.928057 5173 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/089f8d89-d0b0-4ebd-a28c-d5a0da357b1a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.928071 5173 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/089f8d89-d0b0-4ebd-a28c-d5a0da357b1a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.928083 5173 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.929210 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:15:35 crc kubenswrapper[5173]: I1209 14:15:35.932280 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/089f8d89-d0b0-4ebd-a28c-d5a0da357b1a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" (UID: "089f8d89-d0b0-4ebd-a28c-d5a0da357b1a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.028871 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/089f8d89-d0b0-4ebd-a28c-d5a0da357b1a-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.028912 5173 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.541468 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"089f8d89-d0b0-4ebd-a28c-d5a0da357b1a","Type":"ContainerDied","Data":"6263334641af4a646fc51346ccf41943faff17afe83aa62e54d84ce86ed0c653"} Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.541805 5173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6263334641af4a646fc51346ccf41943faff17afe83aa62e54d84ce86ed0c653" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.541639 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.544876 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.545880 5173 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f" exitCode=0 Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.545998 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.546107 5173 scope.go:117] "RemoveContainer" containerID="795472ed85f8907273dd1d43c9bbbee761c69d5332067589f55aea901cd28a66" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.546769 5173 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.548532 5173 status_manager.go:895] "Failed to get status for pod" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.549031 5173 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.565600 5173 status_manager.go:895] "Failed to get status for pod" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.565989 5173 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.566441 5173 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.566832 5173 status_manager.go:895] "Failed to get status for pod" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.567021 5173 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.567177 5173 status_manager.go:895] "Failed to get status for pod" 
podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.569142 5173 scope.go:117] "RemoveContainer" containerID="d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.591283 5173 scope.go:117] "RemoveContainer" containerID="3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.612128 5173 scope.go:117] "RemoveContainer" containerID="649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.634108 5173 scope.go:117] "RemoveContainer" containerID="454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.651791 5173 scope.go:117] "RemoveContainer" containerID="cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.702345 5173 scope.go:117] "RemoveContainer" containerID="795472ed85f8907273dd1d43c9bbbee761c69d5332067589f55aea901cd28a66" Dec 09 14:15:36 crc kubenswrapper[5173]: E1209 14:15:36.703316 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"795472ed85f8907273dd1d43c9bbbee761c69d5332067589f55aea901cd28a66\": container with ID starting with 795472ed85f8907273dd1d43c9bbbee761c69d5332067589f55aea901cd28a66 not found: ID does not exist" containerID="795472ed85f8907273dd1d43c9bbbee761c69d5332067589f55aea901cd28a66" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.703380 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"795472ed85f8907273dd1d43c9bbbee761c69d5332067589f55aea901cd28a66"} err="failed to get container status \"795472ed85f8907273dd1d43c9bbbee761c69d5332067589f55aea901cd28a66\": rpc error: code = NotFound desc = could not find container \"795472ed85f8907273dd1d43c9bbbee761c69d5332067589f55aea901cd28a66\": container with ID starting with 795472ed85f8907273dd1d43c9bbbee761c69d5332067589f55aea901cd28a66 not found: ID does not exist" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.703406 5173 scope.go:117] "RemoveContainer" containerID="d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7" Dec 09 14:15:36 crc kubenswrapper[5173]: E1209 14:15:36.703700 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7\": container with ID starting with d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7 not found: ID does not exist" containerID="d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.703815 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7"} err="failed to get container status \"d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7\": rpc error: code = NotFound desc = could not find container \"d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7\": container with 
ID starting with d0b999a76deedaf160000710bd40eb4171574e9c92cec99ef031f67d7c7a53b7 not found: ID does not exist" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.703915 5173 scope.go:117] "RemoveContainer" containerID="3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd" Dec 09 14:15:36 crc kubenswrapper[5173]: E1209 14:15:36.704502 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd\": container with ID starting with 3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd not found: ID does not exist" containerID="3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.704603 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd"} err="failed to get container status \"3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd\": rpc error: code = NotFound desc = could not find container \"3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd\": container with ID starting with 3589e1dbcec96018c18a370b6a259cd8df94bc482fef1dcb05c98424b68b88bd not found: ID does not exist" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.704700 5173 scope.go:117] "RemoveContainer" containerID="649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d" Dec 09 14:15:36 crc kubenswrapper[5173]: E1209 14:15:36.705192 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d\": container with ID starting with 649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d not found: ID does not exist" containerID="649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.705219 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d"} err="failed to get container status \"649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d\": rpc error: code = NotFound desc = could not find container \"649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d\": container with ID starting with 649d47492a5ef4b97ee359cc418b0a0bd30483798ea6e7a190d0c4971c19d25d not found: ID does not exist" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.705235 5173 scope.go:117] "RemoveContainer" containerID="454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f" Dec 09 14:15:36 crc kubenswrapper[5173]: E1209 14:15:36.705555 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f\": container with ID starting with 454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f not found: ID does not exist" containerID="454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.705579 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f"} err="failed to get container status 
\"454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f\": rpc error: code = NotFound desc = could not find container \"454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f\": container with ID starting with 454119eb878ba00854e1077ac62c0eb7d5861c90fe90460b2fcbacd153cda69f not found: ID does not exist" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.705595 5173 scope.go:117] "RemoveContainer" containerID="cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20" Dec 09 14:15:36 crc kubenswrapper[5173]: E1209 14:15:36.705928 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\": container with ID starting with cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20 not found: ID does not exist" containerID="cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20" Dec 09 14:15:36 crc kubenswrapper[5173]: I1209 14:15:36.705957 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20"} err="failed to get container status \"cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\": rpc error: code = NotFound desc = could not find container \"cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20\": container with ID starting with cc9de9dbff9d04b7dcf3f1766b0e7d94b301e0fd6f08da58b9edcd3a306c6a20 not found: ID does not exist" Dec 09 14:15:37 crc kubenswrapper[5173]: I1209 14:15:37.875554 5173 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:37 crc kubenswrapper[5173]: I1209 14:15:37.876686 5173 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:37 crc kubenswrapper[5173]: I1209 14:15:37.877193 5173 status_manager.go:895] "Failed to get status for pod" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:37 crc kubenswrapper[5173]: I1209 14:15:37.882904 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Dec 09 14:15:40 crc kubenswrapper[5173]: E1209 14:15:40.735915 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:40Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:40Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:40Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T14:15:40Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:40 crc kubenswrapper[5173]: E1209 14:15:40.736432 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:40 crc kubenswrapper[5173]: E1209 14:15:40.736661 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:40 crc kubenswrapper[5173]: E1209 14:15:40.736890 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:40 crc kubenswrapper[5173]: E1209 14:15:40.737176 5173 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:40 crc kubenswrapper[5173]: E1209 14:15:40.737196 5173 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 09 14:15:41 crc kubenswrapper[5173]: E1209 14:15:41.305436 5173 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.144:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f91a8f242f095 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 14:15:33.757530261 +0000 UTC m=+216.682812538,LastTimestamp:2025-12-09 14:15:33.757530261 +0000 UTC 
m=+216.682812538,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 14:15:42 crc kubenswrapper[5173]: E1209 14:15:42.600623 5173 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:42 crc kubenswrapper[5173]: E1209 14:15:42.601055 5173 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:42 crc kubenswrapper[5173]: E1209 14:15:42.601338 5173 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:42 crc kubenswrapper[5173]: E1209 14:15:42.601644 5173 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:42 crc kubenswrapper[5173]: E1209 14:15:42.601870 5173 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Dec 09 14:15:42 crc kubenswrapper[5173]: I1209 14:15:42.601890 5173 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 09 14:15:42 crc kubenswrapper[5173]: E1209 14:15:42.602112 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="200ms" Dec 09 14:15:42 crc kubenswrapper[5173]: E1209 14:15:42.803641 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="400ms" Dec 09 14:15:43 crc kubenswrapper[5173]: E1209 14:15:43.204817 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="800ms" Dec 09 14:15:44 crc kubenswrapper[5173]: E1209 14:15:44.005846 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="1.6s" Dec 09 14:15:45 crc kubenswrapper[5173]: I1209 14:15:45.558767 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-znppb" podUID="fb78d03e-40d5-4c32-9f47-49a596f9b55a" containerName="oauth-openshift" containerID="cri-o://e511d8c90cc7e9608814843c73e55ec40e14fb96f3c08ff6449e7fb80f648e10" gracePeriod=15 Dec 09 14:15:45 
crc kubenswrapper[5173]: E1209 14:15:45.607233 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="3.2s"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.029440 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-znppb"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.030695 5173 status_manager.go:895] "Failed to get status for pod" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.031149 5173 status_manager.go:895] "Failed to get status for pod" podUID="fb78d03e-40d5-4c32-9f47-49a596f9b55a" pod="openshift-authentication/oauth-openshift-66458b6674-znppb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-znppb\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.031589 5173 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.163752 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5zgw\" (UniqueName: \"kubernetes.io/projected/fb78d03e-40d5-4c32-9f47-49a596f9b55a-kube-api-access-d5zgw\") pod \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") "
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.163897 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-template-login\") pod \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") "
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.163937 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-audit-policies\") pod \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") "
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.163974 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-cliconfig\") pod \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") "
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.164024 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-template-error\") pod \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") "
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.164100 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-idp-0-file-data\") pod \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") "
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.164138 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-service-ca\") pod \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") "
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.164220 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-trusted-ca-bundle\") pod \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") "
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.164261 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-ocp-branding-template\") pod \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") "
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.164321 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-router-certs\") pod \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") "
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.164393 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-serving-cert\") pod \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") "
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.164453 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb78d03e-40d5-4c32-9f47-49a596f9b55a-audit-dir\") pod \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") "
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.164615 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-template-provider-selection\") pod \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") "
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.165155 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "fb78d03e-40d5-4c32-9f47-49a596f9b55a" (UID: "fb78d03e-40d5-4c32-9f47-49a596f9b55a"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.165217 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "fb78d03e-40d5-4c32-9f47-49a596f9b55a" (UID: "fb78d03e-40d5-4c32-9f47-49a596f9b55a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.165240 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb78d03e-40d5-4c32-9f47-49a596f9b55a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "fb78d03e-40d5-4c32-9f47-49a596f9b55a" (UID: "fb78d03e-40d5-4c32-9f47-49a596f9b55a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.165288 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-session\") pod \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\" (UID: \"fb78d03e-40d5-4c32-9f47-49a596f9b55a\") "
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.165599 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "fb78d03e-40d5-4c32-9f47-49a596f9b55a" (UID: "fb78d03e-40d5-4c32-9f47-49a596f9b55a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.165875 5173 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb78d03e-40d5-4c32-9f47-49a596f9b55a-audit-dir\") on node \"crc\" DevicePath \"\""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.165904 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.165918 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.165931 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.166913 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "fb78d03e-40d5-4c32-9f47-49a596f9b55a" (UID: "fb78d03e-40d5-4c32-9f47-49a596f9b55a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.171719 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb78d03e-40d5-4c32-9f47-49a596f9b55a-kube-api-access-d5zgw" (OuterVolumeSpecName: "kube-api-access-d5zgw") pod "fb78d03e-40d5-4c32-9f47-49a596f9b55a" (UID: "fb78d03e-40d5-4c32-9f47-49a596f9b55a"). InnerVolumeSpecName "kube-api-access-d5zgw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.172667 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "fb78d03e-40d5-4c32-9f47-49a596f9b55a" (UID: "fb78d03e-40d5-4c32-9f47-49a596f9b55a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.173342 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "fb78d03e-40d5-4c32-9f47-49a596f9b55a" (UID: "fb78d03e-40d5-4c32-9f47-49a596f9b55a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.173525 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "fb78d03e-40d5-4c32-9f47-49a596f9b55a" (UID: "fb78d03e-40d5-4c32-9f47-49a596f9b55a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.174151 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "fb78d03e-40d5-4c32-9f47-49a596f9b55a" (UID: "fb78d03e-40d5-4c32-9f47-49a596f9b55a"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.174881 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "fb78d03e-40d5-4c32-9f47-49a596f9b55a" (UID: "fb78d03e-40d5-4c32-9f47-49a596f9b55a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.175249 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "fb78d03e-40d5-4c32-9f47-49a596f9b55a" (UID: "fb78d03e-40d5-4c32-9f47-49a596f9b55a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.176077 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "fb78d03e-40d5-4c32-9f47-49a596f9b55a" (UID: "fb78d03e-40d5-4c32-9f47-49a596f9b55a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.176393 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "fb78d03e-40d5-4c32-9f47-49a596f9b55a" (UID: "fb78d03e-40d5-4c32-9f47-49a596f9b55a"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.267746 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.267800 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.267821 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d5zgw\" (UniqueName: \"kubernetes.io/projected/fb78d03e-40d5-4c32-9f47-49a596f9b55a-kube-api-access-d5zgw\") on node \"crc\" DevicePath \"\""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.267838 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.267860 5173 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fb78d03e-40d5-4c32-9f47-49a596f9b55a-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.267878 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.267895 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.267913 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.267932 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.267949 5173 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb78d03e-40d5-4c32-9f47-49a596f9b55a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.606123 5173 generic.go:358] "Generic (PLEG): container finished" podID="fb78d03e-40d5-4c32-9f47-49a596f9b55a" containerID="e511d8c90cc7e9608814843c73e55ec40e14fb96f3c08ff6449e7fb80f648e10" exitCode=0
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.606405 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-znppb" event={"ID":"fb78d03e-40d5-4c32-9f47-49a596f9b55a","Type":"ContainerDied","Data":"e511d8c90cc7e9608814843c73e55ec40e14fb96f3c08ff6449e7fb80f648e10"}
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.606456 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-znppb" event={"ID":"fb78d03e-40d5-4c32-9f47-49a596f9b55a","Type":"ContainerDied","Data":"ed0943e30ab4b6e6898b10aa75d98d22b0e41b3d4c9b898d197df03a1889e490"}
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.606514 5173 scope.go:117] "RemoveContainer" containerID="e511d8c90cc7e9608814843c73e55ec40e14fb96f3c08ff6449e7fb80f648e10"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.606844 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-znppb"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.608217 5173 status_manager.go:895] "Failed to get status for pod" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.609611 5173 status_manager.go:895] "Failed to get status for pod" podUID="fb78d03e-40d5-4c32-9f47-49a596f9b55a" pod="openshift-authentication/oauth-openshift-66458b6674-znppb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-znppb\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.610111 5173 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.612649 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.612732 5173 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="3b658001d1e245caf6af8b7e926021b65cf14fe05e112bd9f5ef1b3b34dbc397" exitCode=1
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.612916 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"3b658001d1e245caf6af8b7e926021b65cf14fe05e112bd9f5ef1b3b34dbc397"}
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.613970 5173 scope.go:117] "RemoveContainer" containerID="3b658001d1e245caf6af8b7e926021b65cf14fe05e112bd9f5ef1b3b34dbc397"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.614129 5173 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.614652 5173 status_manager.go:895] "Failed to get status for pod" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.615820 5173 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.616257 5173 status_manager.go:895] "Failed to get status for pod" podUID="fb78d03e-40d5-4c32-9f47-49a596f9b55a" pod="openshift-authentication/oauth-openshift-66458b6674-znppb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-znppb\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.637825 5173 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.638239 5173 status_manager.go:895] "Failed to get status for pod" podUID="fb78d03e-40d5-4c32-9f47-49a596f9b55a" pod="openshift-authentication/oauth-openshift-66458b6674-znppb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-znppb\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.638643 5173 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.639034 5173 status_manager.go:895] "Failed to get status for pod" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.640420 5173 scope.go:117] "RemoveContainer" containerID="e511d8c90cc7e9608814843c73e55ec40e14fb96f3c08ff6449e7fb80f648e10"
Dec 09 14:15:46 crc kubenswrapper[5173]: E1209 14:15:46.640987 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e511d8c90cc7e9608814843c73e55ec40e14fb96f3c08ff6449e7fb80f648e10\": container with ID starting with e511d8c90cc7e9608814843c73e55ec40e14fb96f3c08ff6449e7fb80f648e10 not found: ID does not exist" containerID="e511d8c90cc7e9608814843c73e55ec40e14fb96f3c08ff6449e7fb80f648e10"
Dec 09 14:15:46 crc kubenswrapper[5173]: I1209 14:15:46.641191 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e511d8c90cc7e9608814843c73e55ec40e14fb96f3c08ff6449e7fb80f648e10"} err="failed to get container status \"e511d8c90cc7e9608814843c73e55ec40e14fb96f3c08ff6449e7fb80f648e10\": rpc error: code = NotFound desc = could not find container \"e511d8c90cc7e9608814843c73e55ec40e14fb96f3c08ff6449e7fb80f648e10\": container with ID starting with e511d8c90cc7e9608814843c73e55ec40e14fb96f3c08ff6449e7fb80f648e10 not found: ID does not exist"
Dec 09 14:15:47 crc kubenswrapper[5173]: I1209 14:15:47.624804 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 09 14:15:47 crc kubenswrapper[5173]: I1209 14:15:47.625529 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"2c20f879726cae46daf10e5c53a01d9375004fec2a963d179868d01917952b99"}
Dec 09 14:15:47 crc kubenswrapper[5173]: I1209 14:15:47.627205 5173 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:47 crc kubenswrapper[5173]: I1209 14:15:47.627992 5173 status_manager.go:895] "Failed to get status for pod" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:47 crc kubenswrapper[5173]: I1209 14:15:47.628511 5173 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:47 crc kubenswrapper[5173]: I1209 14:15:47.628985 5173 status_manager.go:895] "Failed to get status for pod" podUID="fb78d03e-40d5-4c32-9f47-49a596f9b55a" pod="openshift-authentication/oauth-openshift-66458b6674-znppb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-znppb\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:47 crc kubenswrapper[5173]: I1209 14:15:47.879544 5173 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:47 crc kubenswrapper[5173]: I1209 14:15:47.880547 5173 status_manager.go:895] "Failed to get status for pod" podUID="fb78d03e-40d5-4c32-9f47-49a596f9b55a" pod="openshift-authentication/oauth-openshift-66458b6674-znppb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-znppb\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:47 crc kubenswrapper[5173]: I1209 14:15:47.881082 5173 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:47 crc kubenswrapper[5173]: I1209 14:15:47.881739 5173 status_manager.go:895] "Failed to get status for pod" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:48 crc kubenswrapper[5173]: E1209 14:15:48.808231 5173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="6.4s"
Dec 09 14:15:48 crc kubenswrapper[5173]: I1209 14:15:48.870732 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:15:48 crc kubenswrapper[5173]: I1209 14:15:48.872257 5173 status_manager.go:895] "Failed to get status for pod" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:48 crc kubenswrapper[5173]: I1209 14:15:48.873272 5173 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:48 crc kubenswrapper[5173]: I1209 14:15:48.874063 5173 status_manager.go:895] "Failed to get status for pod" podUID="fb78d03e-40d5-4c32-9f47-49a596f9b55a" pod="openshift-authentication/oauth-openshift-66458b6674-znppb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-znppb\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:48 crc kubenswrapper[5173]: I1209 14:15:48.875133 5173 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:48 crc kubenswrapper[5173]: I1209 14:15:48.888949 5173 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f29a9c75-e9f9-4865-b566-af6dce495e92"
Dec 09 14:15:48 crc kubenswrapper[5173]: I1209 14:15:48.888993 5173 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f29a9c75-e9f9-4865-b566-af6dce495e92"
Dec 09 14:15:48 crc kubenswrapper[5173]: E1209 14:15:48.889808 5173 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:15:48 crc kubenswrapper[5173]: I1209 14:15:48.890248 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:15:48 crc kubenswrapper[5173]: W1209 14:15:48.909243 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-d0609fb5fbca17250c6b72625391e2b34821c508f3203c4d807cd270cdb74bab WatchSource:0}: Error finding container d0609fb5fbca17250c6b72625391e2b34821c508f3203c4d807cd270cdb74bab: Status 404 returned error can't find the container with id d0609fb5fbca17250c6b72625391e2b34821c508f3203c4d807cd270cdb74bab
Dec 09 14:15:48 crc kubenswrapper[5173]: E1209 14:15:48.916493 5173 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.144:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" volumeName="registry-storage"
Dec 09 14:15:49 crc kubenswrapper[5173]: I1209 14:15:49.085155 5173 patch_prober.go:28] interesting pod/machine-config-daemon-pxfmg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 14:15:49 crc kubenswrapper[5173]: I1209 14:15:49.085239 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 14:15:49 crc kubenswrapper[5173]: I1209 14:15:49.642444 5173 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="969ee490c14d4e158c5a1abb4157411fd8de0ff6fe7c2b0de9a6475b6352e039" exitCode=0
Dec 09 14:15:49 crc kubenswrapper[5173]: I1209 14:15:49.642542 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"969ee490c14d4e158c5a1abb4157411fd8de0ff6fe7c2b0de9a6475b6352e039"}
Dec 09 14:15:49 crc kubenswrapper[5173]: I1209 14:15:49.642867 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d0609fb5fbca17250c6b72625391e2b34821c508f3203c4d807cd270cdb74bab"}
Dec 09 14:15:49 crc kubenswrapper[5173]: I1209 14:15:49.643234 5173 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f29a9c75-e9f9-4865-b566-af6dce495e92"
Dec 09 14:15:49 crc kubenswrapper[5173]: I1209 14:15:49.643248 5173 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f29a9c75-e9f9-4865-b566-af6dce495e92"
Dec 09 14:15:49 crc kubenswrapper[5173]: E1209 14:15:49.643791 5173 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:15:49 crc kubenswrapper[5173]: I1209 14:15:49.643853 5173 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:49 crc kubenswrapper[5173]: I1209 14:15:49.644316 5173 status_manager.go:895] "Failed to get status for pod" podUID="fb78d03e-40d5-4c32-9f47-49a596f9b55a" pod="openshift-authentication/oauth-openshift-66458b6674-znppb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-znppb\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:49 crc kubenswrapper[5173]: I1209 14:15:49.644958 5173 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:49 crc kubenswrapper[5173]: I1209 14:15:49.645290 5173 status_manager.go:895] "Failed to get status for pod" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.144:6443: connect: connection refused"
Dec 09 14:15:50 crc kubenswrapper[5173]: I1209 14:15:50.650334 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"04defbcd41322b289a261421c03f4e66629be7ceb14ad39fa2814d1b22ee22ad"}
Dec 09 14:15:50 crc kubenswrapper[5173]: I1209 14:15:50.650692 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"aa6aecb66e751c5a0ffed16e56b987db18b6842de4b201f9c3fcebd7bb986dac"}
Dec 09 14:15:50 crc kubenswrapper[5173]: I1209 14:15:50.650706 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"1d78a8d9caa6f432209ad7901fde8062773fd6a000780397dbb4eb77a43478c1"}
Dec 09 14:15:50 crc kubenswrapper[5173]: I1209 14:15:50.650718 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"25d4d98adde0b551770a4dffece3e04f49da8bf1645fb764920a27a209f8d2d2"}
Dec 09 14:15:51 crc kubenswrapper[5173]: I1209 14:15:51.657492 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"741a1a9d0c6995029a59c80f183e2fa3e37591de937c5101a2388efa5f55ce4f"}
Dec 09 14:15:51 crc kubenswrapper[5173]: I1209 14:15:51.657665 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:15:51 crc kubenswrapper[5173]: I1209 14:15:51.657795 5173 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f29a9c75-e9f9-4865-b566-af6dce495e92"
Dec 09 14:15:51 crc kubenswrapper[5173]: I1209 14:15:51.657813 5173 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f29a9c75-e9f9-4865-b566-af6dce495e92"
Dec 09 14:15:52 crc kubenswrapper[5173]: I1209 14:15:52.157176 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 14:15:52 crc kubenswrapper[5173]: I1209 14:15:52.160903 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 14:15:52 crc kubenswrapper[5173]: I1209 14:15:52.307938 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 14:15:53 crc kubenswrapper[5173]: I1209 14:15:53.890755 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:15:53 crc kubenswrapper[5173]: I1209 14:15:53.891219 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:15:53 crc kubenswrapper[5173]: I1209 14:15:53.901431 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:15:56 crc kubenswrapper[5173]: I1209 14:15:56.666549 5173 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:15:56 crc kubenswrapper[5173]: I1209 14:15:56.666575 5173 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:15:56 crc kubenswrapper[5173]: I1209 14:15:56.683928 5173 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f29a9c75-e9f9-4865-b566-af6dce495e92"
Dec 09 14:15:56 crc kubenswrapper[5173]: I1209 14:15:56.683961 5173 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f29a9c75-e9f9-4865-b566-af6dce495e92"
Dec 09 14:15:56 crc kubenswrapper[5173]: I1209 14:15:56.687482 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:15:57 crc kubenswrapper[5173]: I1209 14:15:57.695719 5173 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f29a9c75-e9f9-4865-b566-af6dce495e92"
Dec 09 14:15:57 crc kubenswrapper[5173]: I1209 14:15:57.696036 5173 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f29a9c75-e9f9-4865-b566-af6dce495e92"
Dec 09 14:15:57 crc kubenswrapper[5173]: I1209 14:15:57.896099 5173 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="68dc68eb-d42e-486f-8765-2de6132418a2"
Dec 09 14:16:03 crc kubenswrapper[5173]: I1209 14:16:03.678122 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 14:16:06 crc kubenswrapper[5173]: I1209 14:16:06.577987 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Dec 09 14:16:06 crc kubenswrapper[5173]: I1209 14:16:06.653624 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Dec 09 14:16:07 crc kubenswrapper[5173]: I1209 14:16:07.027014 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Dec 09 14:16:07 crc kubenswrapper[5173]: I1209 14:16:07.570661 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Dec 09 14:16:07 crc kubenswrapper[5173]: I1209 14:16:07.840395 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Dec 09 14:16:07 crc kubenswrapper[5173]: I1209 14:16:07.859911 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Dec 09 14:16:07 crc kubenswrapper[5173]: I1209 14:16:07.910920 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Dec 09 14:16:08 crc kubenswrapper[5173]: I1209 14:16:08.269976 5173 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 09 14:16:08 crc kubenswrapper[5173]: I1209 14:16:08.304893 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Dec 09 14:16:08 crc kubenswrapper[5173]: I1209 14:16:08.306513 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Dec 09 14:16:08 crc kubenswrapper[5173]: I1209 14:16:08.394779 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Dec 09 14:16:08 crc kubenswrapper[5173]: I1209 14:16:08.406458 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Dec 09 14:16:08 crc kubenswrapper[5173]: I1209 14:16:08.566391 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Dec 09 14:16:08 crc kubenswrapper[5173]: I1209 14:16:08.758240 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Dec 09 14:16:08 crc kubenswrapper[5173]: I1209 14:16:08.876756 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Dec 09 14:16:09 crc kubenswrapper[5173]: I1209 14:16:09.178915 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Dec 09 14:16:09 crc kubenswrapper[5173]: I1209 14:16:09.202172 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Dec 09 14:16:09 crc kubenswrapper[5173]: I1209 14:16:09.242889 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Dec 09 14:16:09 crc kubenswrapper[5173]: I1209 14:16:09.294403 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Dec 09 14:16:09 crc kubenswrapper[5173]: I1209 14:16:09.354488 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Dec 09 14:16:09 crc kubenswrapper[5173]: I1209 14:16:09.420176 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Dec 09 14:16:09 crc kubenswrapper[5173]: I1209 14:16:09.474134 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Dec 09 14:16:09 crc kubenswrapper[5173]: I1209 14:16:09.493513 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Dec 09 14:16:09 crc kubenswrapper[5173]: I1209 14:16:09.495004 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Dec 09 14:16:09 crc kubenswrapper[5173]: I1209 14:16:09.527447 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Dec 09 14:16:09 crc kubenswrapper[5173]: I1209 14:16:09.589203 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Dec 09 14:16:09 crc kubenswrapper[5173]: I1209 14:16:09.635726 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Dec 09 14:16:09 crc kubenswrapper[5173]: I1209 14:16:09.701606 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Dec 09 14:16:09 crc kubenswrapper[5173]: I1209 14:16:09.736907 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Dec 09 14:16:09 crc kubenswrapper[5173]: I1209 14:16:09.744463 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Dec 09 14:16:09 crc kubenswrapper[5173]: I1209 14:16:09.865148 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Dec 09 14:16:09 crc kubenswrapper[5173]: I1209 14:16:09.884126 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Dec 09 14:16:09 crc kubenswrapper[5173]: I1209 14:16:09.955128 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Dec 09 14:16:10 crc kubenswrapper[5173]: I1209 14:16:10.151950 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Dec 09 14:16:10 crc kubenswrapper[5173]: I1209 14:16:10.200968 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Dec 09 14:16:10 crc kubenswrapper[5173]: I1209 14:16:10.209842 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Dec 09 14:16:10 crc kubenswrapper[5173]: I1209 14:16:10.242288 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Dec 09 14:16:10 crc kubenswrapper[5173]: I1209 14:16:10.353908 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Dec 09 14:16:10 crc kubenswrapper[5173]: I1209 14:16:10.496946 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Dec 09 14:16:10 crc kubenswrapper[5173]: I1209 14:16:10.516999 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Dec 09 14:16:10 crc kubenswrapper[5173]: I1209 14:16:10.614683 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Dec 09 14:16:10 crc kubenswrapper[5173]: I1209 14:16:10.677028 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Dec 09 14:16:10 crc kubenswrapper[5173]: I1209 14:16:10.691726 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Dec 09 14:16:10 crc kubenswrapper[5173]: I1209 14:16:10.760045 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Dec 09 14:16:10 crc kubenswrapper[5173]: I1209 14:16:10.914171 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Dec 09 14:16:10 crc kubenswrapper[5173]: I1209 14:16:10.934252 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Dec 09 14:16:10 crc kubenswrapper[5173]: I1209 14:16:10.995782 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Dec 09 14:16:11 crc kubenswrapper[5173]: I1209 14:16:11.050833 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Dec 09 14:16:11 crc kubenswrapper[5173]: I1209 14:16:11.095980 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Dec 09 14:16:11 crc kubenswrapper[5173]: I1209 14:16:11.109066 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Dec 09 14:16:11 crc kubenswrapper[5173]: I1209 14:16:11.165856 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Dec 09 14:16:11 crc kubenswrapper[5173]: I1209 14:16:11.185741 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Dec 09 14:16:11 crc kubenswrapper[5173]: I1209 14:16:11.215695 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Dec 09 14:16:11 crc kubenswrapper[5173]: I1209 14:16:11.302936 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Dec 09 14:16:11 crc kubenswrapper[5173]: I1209 14:16:11.395642 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Dec 09 14:16:11 crc kubenswrapper[5173]: I1209 14:16:11.448416 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Dec 09 14:16:11 crc kubenswrapper[5173]: I1209 14:16:11.454525 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Dec 09 14:16:11 crc kubenswrapper[5173]: I1209 14:16:11.506468 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Dec 09 14:16:11 crc kubenswrapper[5173]: I1209 14:16:11.569514 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Dec 09 14:16:11 crc kubenswrapper[5173]: I1209 14:16:11.689954 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Dec 09 14:16:11 crc kubenswrapper[5173]: I1209 14:16:11.906097 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Dec 09 14:16:11 crc kubenswrapper[5173]: I1209 14:16:11.913131 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Dec 09 14:16:11 crc kubenswrapper[5173]: I1209 14:16:11.951567 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Dec 09 14:16:11 crc kubenswrapper[5173]: I1209 14:16:11.988430 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Dec 09 14:16:11 crc kubenswrapper[5173]: I1209 14:16:11.994017 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.032169 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.060325 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.090845 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.125660 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.191186 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.304543 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.376715 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.446423 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.499458 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.503158 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.524280 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.585313 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.661701 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.746710 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.806990 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.863759 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.871734 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.893910 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.906197 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.927376 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.978876 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 09 14:16:12 crc kubenswrapper[5173]: I1209 14:16:12.999475 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.080144 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.114561 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.183445 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.303018 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.317012 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.348206 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.378334 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.435705 5173 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.436501 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=40.436487123 podStartE2EDuration="40.436487123s" podCreationTimestamp="2025-12-09 14:15:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:15:56.679475446 +0000 UTC m=+239.604757693" watchObservedRunningTime="2025-12-09 14:16:13.436487123 +0000 UTC m=+256.361769380"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.442233 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-66458b6674-znppb"]
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.442328 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-6bd946fff4-jrsnr"]
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.442753 5173 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f29a9c75-e9f9-4865-b566-af6dce495e92"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.442779 5173 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f29a9c75-e9f9-4865-b566-af6dce495e92"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.443349 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" containerName="installer"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.443402 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" containerName="installer"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.443428 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb78d03e-40d5-4c32-9f47-49a596f9b55a" containerName="oauth-openshift"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.443440 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb78d03e-40d5-4c32-9f47-49a596f9b55a" containerName="oauth-openshift"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.443616 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="fb78d03e-40d5-4c32-9f47-49a596f9b55a" containerName="oauth-openshift"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.443640 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="089f8d89-d0b0-4ebd-a28c-d5a0da357b1a" containerName="installer"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.471420 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.471441 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.473868 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.474373 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.474409 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.474794 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.475146 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.475584 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.475772 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.475832 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.476010 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.476229 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.476614 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.476882 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.481657 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.482930 5173 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.489269 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.496171 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.508646 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.508624894 podStartE2EDuration="17.508624894s" podCreationTimestamp="2025-12-09 14:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:13.505456066 +0000 UTC m=+256.430738333" watchObservedRunningTime="2025-12-09 14:16:13.508624894 +0000 UTC m=+256.433907141"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.636677 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-566nc\" (UniqueName: \"kubernetes.io/projected/f0ae540a-27f2-43ea-862a-7689e48746f5-kube-api-access-566nc\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.636723 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.636753 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-user-template-error\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.636772 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f0ae540a-27f2-43ea-862a-7689e48746f5-audit-policies\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.636882 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f0ae540a-27f2-43ea-862a-7689e48746f5-audit-dir\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr"
Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.637003 5173 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.637045 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.637151 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.637203 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-user-template-login\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.637256 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.637309 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.637387 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.637465 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-session\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: 
\"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.637524 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.739224 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-session\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.739629 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.739699 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-566nc\" (UniqueName: \"kubernetes.io/projected/f0ae540a-27f2-43ea-862a-7689e48746f5-kube-api-access-566nc\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.739734 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.739773 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-user-template-error\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.739802 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f0ae540a-27f2-43ea-862a-7689e48746f5-audit-policies\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.739841 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f0ae540a-27f2-43ea-862a-7689e48746f5-audit-dir\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " 
pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.739876 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.739900 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.739932 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.739964 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-user-template-login\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.739993 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.740027 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.740057 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.740633 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f0ae540a-27f2-43ea-862a-7689e48746f5-audit-dir\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " 
pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.741247 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.741491 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f0ae540a-27f2-43ea-862a-7689e48746f5-audit-policies\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.741505 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.741503 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.746737 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.747400 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.747815 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.747835 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 
09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.748458 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-user-template-error\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.748715 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-user-template-login\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.749915 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.750250 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f0ae540a-27f2-43ea-862a-7689e48746f5-v4-0-config-system-session\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.761072 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-566nc\" (UniqueName: \"kubernetes.io/projected/f0ae540a-27f2-43ea-862a-7689e48746f5-kube-api-access-566nc\") pod \"oauth-openshift-6bd946fff4-jrsnr\" (UID: \"f0ae540a-27f2-43ea-862a-7689e48746f5\") " pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.789691 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.789851 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.877736 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb78d03e-40d5-4c32-9f47-49a596f9b55a" path="/var/lib/kubelet/pods/fb78d03e-40d5-4c32-9f47-49a596f9b55a/volumes" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.906520 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:13 crc kubenswrapper[5173]: I1209 14:16:13.918284 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 09 14:16:14 crc kubenswrapper[5173]: I1209 14:16:14.059202 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5173]: I1209 14:16:14.080380 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5173]: I1209 14:16:14.139780 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5173]: I1209 14:16:14.169545 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 09 14:16:14 crc kubenswrapper[5173]: I1209 14:16:14.311289 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 09 14:16:14 crc kubenswrapper[5173]: I1209 14:16:14.371484 5173 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 14:16:14 crc kubenswrapper[5173]: I1209 14:16:14.398436 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 09 14:16:14 crc kubenswrapper[5173]: I1209 14:16:14.535158 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 09 14:16:14 crc kubenswrapper[5173]: I1209 14:16:14.541044 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 09 14:16:14 crc kubenswrapper[5173]: I1209 14:16:14.703070 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 09 14:16:14 crc kubenswrapper[5173]: I1209 14:16:14.703680 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 09 14:16:14 crc kubenswrapper[5173]: I1209 14:16:14.725612 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 09 14:16:14 crc kubenswrapper[5173]: I1209 14:16:14.740038 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 09 14:16:14 crc kubenswrapper[5173]: I1209 14:16:14.765590 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-console\"/\"service-ca\"" Dec 09 14:16:14 crc kubenswrapper[5173]: I1209 14:16:14.795873 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 09 14:16:14 crc kubenswrapper[5173]: I1209 14:16:14.908274 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 09 14:16:14 crc kubenswrapper[5173]: I1209 14:16:14.911826 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 09 14:16:15 crc kubenswrapper[5173]: I1209 14:16:15.014316 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 09 14:16:15 crc kubenswrapper[5173]: I1209 14:16:15.098368 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 09 14:16:15 crc kubenswrapper[5173]: I1209 14:16:15.140903 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 09 14:16:15 crc kubenswrapper[5173]: I1209 14:16:15.199294 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 09 14:16:15 crc kubenswrapper[5173]: I1209 14:16:15.244033 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 09 14:16:15 crc kubenswrapper[5173]: I1209 14:16:15.291590 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 09 14:16:15 crc kubenswrapper[5173]: I1209 14:16:15.318141 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 09 14:16:15 crc kubenswrapper[5173]: I1209 14:16:15.354608 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 09 14:16:15 crc kubenswrapper[5173]: I1209 14:16:15.390124 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 09 14:16:15 crc kubenswrapper[5173]: I1209 14:16:15.434060 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 09 14:16:15 crc kubenswrapper[5173]: I1209 14:16:15.453759 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 09 14:16:15 crc kubenswrapper[5173]: I1209 14:16:15.454883 5173 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 14:16:15 crc kubenswrapper[5173]: I1209 14:16:15.520440 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 09 14:16:15 crc kubenswrapper[5173]: I1209 14:16:15.544819 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 09 14:16:15 crc kubenswrapper[5173]: I1209 14:16:15.662986 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 09 14:16:15 crc kubenswrapper[5173]: I1209 14:16:15.787200 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 09 14:16:15 crc kubenswrapper[5173]: I1209 14:16:15.912033 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.021193 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.095000 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.132865 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.141462 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.226321 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.257609 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.264939 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.280500 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.284071 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.298699 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.408733 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.448456 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6bd946fff4-jrsnr"] Dec 09 14:16:16 crc kubenswrapper[5173]: W1209 14:16:16.449521 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0ae540a_27f2_43ea_862a_7689e48746f5.slice/crio-debe5e85ebac38ca2c83039da907002a34fa9dd61570ec9874322b10b7c832b5 WatchSource:0}: Error finding container debe5e85ebac38ca2c83039da907002a34fa9dd61570ec9874322b10b7c832b5: Status 404 returned error can't find the container with id debe5e85ebac38ca2c83039da907002a34fa9dd61570ec9874322b10b7c832b5 Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.470476 5173 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.567850 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.614245 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.636916 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.666071 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.743473 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.757893 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.778727 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.797167 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" event={"ID":"f0ae540a-27f2-43ea-862a-7689e48746f5","Type":"ContainerStarted","Data":"6a60a881a9b545cbd6da07430e895dbccfa34cbece6ec33faa8c0f1c7a5302d2"} Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.797222 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" event={"ID":"f0ae540a-27f2-43ea-862a-7689e48746f5","Type":"ContainerStarted","Data":"debe5e85ebac38ca2c83039da907002a34fa9dd61570ec9874322b10b7c832b5"} Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.798390 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.800867 5173 patch_prober.go:28] interesting pod/oauth-openshift-6bd946fff4-jrsnr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.63:6443/healthz\": dial tcp 10.217.0.63:6443: connect: connection refused" start-of-body= Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.800948 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" podUID="f0ae540a-27f2-43ea-862a-7689e48746f5" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.63:6443/healthz\": dial tcp 10.217.0.63:6443: connect: connection refused" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.821307 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" podStartSLOduration=56.82127811 podStartE2EDuration="56.82127811s" podCreationTimestamp="2025-12-09 14:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:16.817786982 +0000 UTC m=+259.743069239" watchObservedRunningTime="2025-12-09 14:16:16.82127811 +0000 UTC m=+259.746560377" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.821939 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.865449 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.866296 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.883606 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 09 14:16:16 crc kubenswrapper[5173]: I1209 14:16:16.924733 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 09 14:16:17 crc kubenswrapper[5173]: I1209 14:16:17.013239 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 09 14:16:17 crc kubenswrapper[5173]: I1209 14:16:17.054874 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 09 14:16:17 crc kubenswrapper[5173]: I1209 14:16:17.131026 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 09 14:16:17 crc kubenswrapper[5173]: I1209 14:16:17.159592 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 09 14:16:17 crc kubenswrapper[5173]: I1209 14:16:17.194748 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 09 14:16:17 crc kubenswrapper[5173]: I1209 14:16:17.321076 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 09 14:16:17 crc kubenswrapper[5173]: I1209 14:16:17.442493 5173 ???:1] "http: TLS handshake error from 192.168.126.11:39166: no serving certificate available for the kubelet" Dec 09 14:16:17 crc kubenswrapper[5173]: I1209 14:16:17.598540 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 09 14:16:17 crc kubenswrapper[5173]: I1209 14:16:17.644345 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 09 14:16:17 crc kubenswrapper[5173]: I1209 14:16:17.680849 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:17 crc kubenswrapper[5173]: I1209 14:16:17.741806 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g"] Dec 09 14:16:17 crc kubenswrapper[5173]: I1209 14:16:17.742205 5173 kuberuntime_container.go:858] "Killing 
container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" podUID="b60b11a3-6068-4c59-bc81-8bc06ba89d0e" containerName="route-controller-manager" containerID="cri-o://cb0dd0290b915936b531f9e3ef84287bb9c3c1670f4cf9760b955ca62db27fc5" gracePeriod=30 Dec 09 14:16:17 crc kubenswrapper[5173]: I1209 14:16:17.750878 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr"] Dec 09 14:16:17 crc kubenswrapper[5173]: I1209 14:16:17.751269 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" podUID="7d43d7f6-269b-42cb-a5c5-ee55ebc08c58" containerName="controller-manager" containerID="cri-o://29a0010d6682e87152cc9a8db73dd177bcc655b9d3158d91b336e57ef1c8d60b" gracePeriod=30 Dec 09 14:16:17 crc kubenswrapper[5173]: I1209 14:16:17.754252 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 09 14:16:17 crc kubenswrapper[5173]: I1209 14:16:17.773992 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:17 crc kubenswrapper[5173]: I1209 14:16:17.931306 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.021899 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.023931 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.036126 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.153201 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.249949 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.258718 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.279303 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5"] Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.284587 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d43d7f6-269b-42cb-a5c5-ee55ebc08c58" containerName="controller-manager" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.284621 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d43d7f6-269b-42cb-a5c5-ee55ebc08c58" containerName="controller-manager" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.284657 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b60b11a3-6068-4c59-bc81-8bc06ba89d0e" containerName="route-controller-manager" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.284666 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="b60b11a3-6068-4c59-bc81-8bc06ba89d0e" containerName="route-controller-manager" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.284882 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="b60b11a3-6068-4c59-bc81-8bc06ba89d0e" containerName="route-controller-manager" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.284899 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="7d43d7f6-269b-42cb-a5c5-ee55ebc08c58" containerName="controller-manager" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.293811 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5"] Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.294180 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.299739 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6bd946fff4-jrsnr" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.312428 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r"] Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.324222 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.329583 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r"] Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.329784 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.405273 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-serving-cert\") pod \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.405372 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-client-ca\") pod \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.405414 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-config\") pod \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.405467 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-proxy-ca-bundles\") pod \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.405492 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-config\") pod \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.405550 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-tmp\") pod \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.405577 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wt2p\" (UniqueName: \"kubernetes.io/projected/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-kube-api-access-6wt2p\") pod \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.405602 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmt6n\" (UniqueName: \"kubernetes.io/projected/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-kube-api-access-cmt6n\") pod \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.405648 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-tmp\") pod \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\" (UID: \"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58\") " Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.405757 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-serving-cert\") pod \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\" (UID: 
\"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.405787 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-client-ca\") pod \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\" (UID: \"b60b11a3-6068-4c59-bc81-8bc06ba89d0e\") " Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.405956 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c15feb-fabd-4063-bee9-cd3b28e64eb0-config\") pod \"controller-manager-784fcdd8f8-p8zt5\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.405994 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c15feb-fabd-4063-bee9-cd3b28e64eb0-proxy-ca-bundles\") pod \"controller-manager-784fcdd8f8-p8zt5\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.406067 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16c15feb-fabd-4063-bee9-cd3b28e64eb0-tmp\") pod \"controller-manager-784fcdd8f8-p8zt5\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.406108 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l65km\" (UniqueName: \"kubernetes.io/projected/16c15feb-fabd-4063-bee9-cd3b28e64eb0-kube-api-access-l65km\") pod \"controller-manager-784fcdd8f8-p8zt5\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.406168 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c15feb-fabd-4063-bee9-cd3b28e64eb0-serving-cert\") pod \"controller-manager-784fcdd8f8-p8zt5\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.406203 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16c15feb-fabd-4063-bee9-cd3b28e64eb0-client-ca\") pod \"controller-manager-784fcdd8f8-p8zt5\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.407516 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-tmp" (OuterVolumeSpecName: "tmp") pod "7d43d7f6-269b-42cb-a5c5-ee55ebc08c58" (UID: "7d43d7f6-269b-42cb-a5c5-ee55ebc08c58"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.408225 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-client-ca" (OuterVolumeSpecName: "client-ca") pod "7d43d7f6-269b-42cb-a5c5-ee55ebc08c58" (UID: "7d43d7f6-269b-42cb-a5c5-ee55ebc08c58"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.408504 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7d43d7f6-269b-42cb-a5c5-ee55ebc08c58" (UID: "7d43d7f6-269b-42cb-a5c5-ee55ebc08c58"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.409059 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-tmp" (OuterVolumeSpecName: "tmp") pod "b60b11a3-6068-4c59-bc81-8bc06ba89d0e" (UID: "b60b11a3-6068-4c59-bc81-8bc06ba89d0e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.409121 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-client-ca" (OuterVolumeSpecName: "client-ca") pod "b60b11a3-6068-4c59-bc81-8bc06ba89d0e" (UID: "b60b11a3-6068-4c59-bc81-8bc06ba89d0e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.409215 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-config" (OuterVolumeSpecName: "config") pod "7d43d7f6-269b-42cb-a5c5-ee55ebc08c58" (UID: "7d43d7f6-269b-42cb-a5c5-ee55ebc08c58"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.409196 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-config" (OuterVolumeSpecName: "config") pod "b60b11a3-6068-4c59-bc81-8bc06ba89d0e" (UID: "b60b11a3-6068-4c59-bc81-8bc06ba89d0e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.412379 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b60b11a3-6068-4c59-bc81-8bc06ba89d0e" (UID: "b60b11a3-6068-4c59-bc81-8bc06ba89d0e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.414569 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-kube-api-access-cmt6n" (OuterVolumeSpecName: "kube-api-access-cmt6n") pod "b60b11a3-6068-4c59-bc81-8bc06ba89d0e" (UID: "b60b11a3-6068-4c59-bc81-8bc06ba89d0e"). InnerVolumeSpecName "kube-api-access-cmt6n". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.417746 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-kube-api-access-6wt2p" (OuterVolumeSpecName: "kube-api-access-6wt2p") pod "7d43d7f6-269b-42cb-a5c5-ee55ebc08c58" (UID: "7d43d7f6-269b-42cb-a5c5-ee55ebc08c58"). InnerVolumeSpecName "kube-api-access-6wt2p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.417915 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7d43d7f6-269b-42cb-a5c5-ee55ebc08c58" (UID: "7d43d7f6-269b-42cb-a5c5-ee55ebc08c58"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507610 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c15feb-fabd-4063-bee9-cd3b28e64eb0-serving-cert\") pod \"controller-manager-784fcdd8f8-p8zt5\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507655 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16c15feb-fabd-4063-bee9-cd3b28e64eb0-client-ca\") pod \"controller-manager-784fcdd8f8-p8zt5\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507691 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c15feb-fabd-4063-bee9-cd3b28e64eb0-config\") pod \"controller-manager-784fcdd8f8-p8zt5\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507710 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c15feb-fabd-4063-bee9-cd3b28e64eb0-proxy-ca-bundles\") pod \"controller-manager-784fcdd8f8-p8zt5\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507736 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tf6n\" (UniqueName: \"kubernetes.io/projected/81221f2e-9c60-4997-b56a-a109daae77ae-kube-api-access-2tf6n\") pod \"route-controller-manager-6db46bf7d7-q8b4r\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507758 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81221f2e-9c60-4997-b56a-a109daae77ae-serving-cert\") pod \"route-controller-manager-6db46bf7d7-q8b4r\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:18 crc 
kubenswrapper[5173]: I1209 14:16:18.507775 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81221f2e-9c60-4997-b56a-a109daae77ae-config\") pod \"route-controller-manager-6db46bf7d7-q8b4r\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507789 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81221f2e-9c60-4997-b56a-a109daae77ae-client-ca\") pod \"route-controller-manager-6db46bf7d7-q8b4r\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507825 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/81221f2e-9c60-4997-b56a-a109daae77ae-tmp\") pod \"route-controller-manager-6db46bf7d7-q8b4r\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507842 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16c15feb-fabd-4063-bee9-cd3b28e64eb0-tmp\") pod \"controller-manager-784fcdd8f8-p8zt5\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507869 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l65km\" (UniqueName: \"kubernetes.io/projected/16c15feb-fabd-4063-bee9-cd3b28e64eb0-kube-api-access-l65km\") pod \"controller-manager-784fcdd8f8-p8zt5\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507916 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507925 5173 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507934 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507942 5173 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507950 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6wt2p\" (UniqueName: \"kubernetes.io/projected/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-kube-api-access-6wt2p\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507957 5173 reconciler_common.go:299] "Volume 
detached for volume \"kube-api-access-cmt6n\" (UniqueName: \"kubernetes.io/projected/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-kube-api-access-cmt6n\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507965 5173 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507973 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507981 5173 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b60b11a3-6068-4c59-bc81-8bc06ba89d0e-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507988 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.507995 5173 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.509326 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16c15feb-fabd-4063-bee9-cd3b28e64eb0-tmp\") pod \"controller-manager-784fcdd8f8-p8zt5\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.509970 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16c15feb-fabd-4063-bee9-cd3b28e64eb0-client-ca\") pod \"controller-manager-784fcdd8f8-p8zt5\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.509983 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c15feb-fabd-4063-bee9-cd3b28e64eb0-proxy-ca-bundles\") pod \"controller-manager-784fcdd8f8-p8zt5\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.510547 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c15feb-fabd-4063-bee9-cd3b28e64eb0-config\") pod \"controller-manager-784fcdd8f8-p8zt5\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.512856 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c15feb-fabd-4063-bee9-cd3b28e64eb0-serving-cert\") pod \"controller-manager-784fcdd8f8-p8zt5\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.527397 5173 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l65km\" (UniqueName: \"kubernetes.io/projected/16c15feb-fabd-4063-bee9-cd3b28e64eb0-kube-api-access-l65km\") pod \"controller-manager-784fcdd8f8-p8zt5\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.609048 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/81221f2e-9c60-4997-b56a-a109daae77ae-tmp\") pod \"route-controller-manager-6db46bf7d7-q8b4r\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.609147 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2tf6n\" (UniqueName: \"kubernetes.io/projected/81221f2e-9c60-4997-b56a-a109daae77ae-kube-api-access-2tf6n\") pod \"route-controller-manager-6db46bf7d7-q8b4r\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.609170 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81221f2e-9c60-4997-b56a-a109daae77ae-serving-cert\") pod \"route-controller-manager-6db46bf7d7-q8b4r\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.609186 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81221f2e-9c60-4997-b56a-a109daae77ae-config\") pod \"route-controller-manager-6db46bf7d7-q8b4r\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.609202 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81221f2e-9c60-4997-b56a-a109daae77ae-client-ca\") pod \"route-controller-manager-6db46bf7d7-q8b4r\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.610141 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81221f2e-9c60-4997-b56a-a109daae77ae-client-ca\") pod \"route-controller-manager-6db46bf7d7-q8b4r\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.610331 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/81221f2e-9c60-4997-b56a-a109daae77ae-tmp\") pod \"route-controller-manager-6db46bf7d7-q8b4r\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.611281 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/81221f2e-9c60-4997-b56a-a109daae77ae-config\") pod \"route-controller-manager-6db46bf7d7-q8b4r\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.614273 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81221f2e-9c60-4997-b56a-a109daae77ae-serving-cert\") pod \"route-controller-manager-6db46bf7d7-q8b4r\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.615471 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.631255 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tf6n\" (UniqueName: \"kubernetes.io/projected/81221f2e-9c60-4997-b56a-a109daae77ae-kube-api-access-2tf6n\") pod \"route-controller-manager-6db46bf7d7-q8b4r\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.637758 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.648059 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.725281 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.800570 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.811754 5173 generic.go:358] "Generic (PLEG): container finished" podID="7d43d7f6-269b-42cb-a5c5-ee55ebc08c58" containerID="29a0010d6682e87152cc9a8db73dd177bcc655b9d3158d91b336e57ef1c8d60b" exitCode=0 Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.811846 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" event={"ID":"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58","Type":"ContainerDied","Data":"29a0010d6682e87152cc9a8db73dd177bcc655b9d3158d91b336e57ef1c8d60b"} Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.811881 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" event={"ID":"7d43d7f6-269b-42cb-a5c5-ee55ebc08c58","Type":"ContainerDied","Data":"b1acaab327f59f114e1c9a7efe8dd33c7d80d7320e7e8515656cab6c93d121ad"} Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.811904 5173 scope.go:117] "RemoveContainer" containerID="29a0010d6682e87152cc9a8db73dd177bcc655b9d3158d91b336e57ef1c8d60b" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.812070 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.815781 5173 generic.go:358] "Generic (PLEG): container finished" podID="b60b11a3-6068-4c59-bc81-8bc06ba89d0e" containerID="cb0dd0290b915936b531f9e3ef84287bb9c3c1670f4cf9760b955ca62db27fc5" exitCode=0 Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.815937 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.815923 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" event={"ID":"b60b11a3-6068-4c59-bc81-8bc06ba89d0e","Type":"ContainerDied","Data":"cb0dd0290b915936b531f9e3ef84287bb9c3c1670f4cf9760b955ca62db27fc5"} Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.816094 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g" event={"ID":"b60b11a3-6068-4c59-bc81-8bc06ba89d0e","Type":"ContainerDied","Data":"7c4f972db7b02eea985a2b72867f2dd5d4f587dcc8b048d6edcccea646f34b1d"} Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.840277 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.848260 5173 scope.go:117] "RemoveContainer" containerID="29a0010d6682e87152cc9a8db73dd177bcc655b9d3158d91b336e57ef1c8d60b" Dec 09 14:16:18 crc kubenswrapper[5173]: E1209 14:16:18.848705 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29a0010d6682e87152cc9a8db73dd177bcc655b9d3158d91b336e57ef1c8d60b\": container with ID starting with 29a0010d6682e87152cc9a8db73dd177bcc655b9d3158d91b336e57ef1c8d60b not found: ID does not exist" containerID="29a0010d6682e87152cc9a8db73dd177bcc655b9d3158d91b336e57ef1c8d60b" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.848814 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29a0010d6682e87152cc9a8db73dd177bcc655b9d3158d91b336e57ef1c8d60b"} err="failed to get container status \"29a0010d6682e87152cc9a8db73dd177bcc655b9d3158d91b336e57ef1c8d60b\": rpc error: code = NotFound desc = could not find container \"29a0010d6682e87152cc9a8db73dd177bcc655b9d3158d91b336e57ef1c8d60b\": container with ID starting with 29a0010d6682e87152cc9a8db73dd177bcc655b9d3158d91b336e57ef1c8d60b not found: ID does not exist" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.848915 5173 scope.go:117] "RemoveContainer" containerID="cb0dd0290b915936b531f9e3ef84287bb9c3c1670f4cf9760b955ca62db27fc5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.852338 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g"] Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.859700 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656684887c-zgq8g"] Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.864689 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr"] Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.867393 5173 
scope.go:117] "RemoveContainer" containerID="cb0dd0290b915936b531f9e3ef84287bb9c3c1670f4cf9760b955ca62db27fc5" Dec 09 14:16:18 crc kubenswrapper[5173]: E1209 14:16:18.867753 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb0dd0290b915936b531f9e3ef84287bb9c3c1670f4cf9760b955ca62db27fc5\": container with ID starting with cb0dd0290b915936b531f9e3ef84287bb9c3c1670f4cf9760b955ca62db27fc5 not found: ID does not exist" containerID="cb0dd0290b915936b531f9e3ef84287bb9c3c1670f4cf9760b955ca62db27fc5" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.867780 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb0dd0290b915936b531f9e3ef84287bb9c3c1670f4cf9760b955ca62db27fc5"} err="failed to get container status \"cb0dd0290b915936b531f9e3ef84287bb9c3c1670f4cf9760b955ca62db27fc5\": rpc error: code = NotFound desc = could not find container \"cb0dd0290b915936b531f9e3ef84287bb9c3c1670f4cf9760b955ca62db27fc5\": container with ID starting with cb0dd0290b915936b531f9e3ef84287bb9c3c1670f4cf9760b955ca62db27fc5 not found: ID does not exist" Dec 09 14:16:18 crc kubenswrapper[5173]: I1209 14:16:18.868808 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7cf8b4c577-5fsvr"] Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.001726 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.018264 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5"] Dec 09 14:16:19 crc kubenswrapper[5173]: W1209 14:16:19.021132 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16c15feb_fabd_4063_bee9_cd3b28e64eb0.slice/crio-59204cb2455de5641ab8365de88e45b322caab08ca2f5a8992b5d4c159487e3a WatchSource:0}: Error finding container 59204cb2455de5641ab8365de88e45b322caab08ca2f5a8992b5d4c159487e3a: Status 404 returned error can't find the container with id 59204cb2455de5641ab8365de88e45b322caab08ca2f5a8992b5d4c159487e3a Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.040160 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.060502 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r"] Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.078567 5173 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.086159 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://1504847409cadb7ba6aff1f485523aa7088fda4b0fb30ee6746a959898516f24" gracePeriod=5 Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.086739 5173 patch_prober.go:28] interesting pod/machine-config-daemon-pxfmg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.086893 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.086884 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.139191 5173 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.174340 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.293569 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.336219 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.348261 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.348692 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.394207 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.413643 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.543007 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.604321 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.622505 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.668633 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.677116 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.718662 5173 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.779763 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.813464 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.823863 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" event={"ID":"16c15feb-fabd-4063-bee9-cd3b28e64eb0","Type":"ContainerStarted","Data":"c70ffacd74153682d72ddd9bb34fc07065ca7951b3f7d336ca2960b919ff1e9e"} Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.823918 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.823934 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" event={"ID":"16c15feb-fabd-4063-bee9-cd3b28e64eb0","Type":"ContainerStarted","Data":"59204cb2455de5641ab8365de88e45b322caab08ca2f5a8992b5d4c159487e3a"} Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.825313 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" event={"ID":"81221f2e-9c60-4997-b56a-a109daae77ae","Type":"ContainerStarted","Data":"6f1bd5a1eb0ba474b092694f8d4b27413341b68715edc08ec56d126bc7a2d835"} Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.825379 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" event={"ID":"81221f2e-9c60-4997-b56a-a109daae77ae","Type":"ContainerStarted","Data":"4e4b4232fee6c061ab6dbd5fd11701f53101e5fe670dc5d61b1f51415761f2f0"} Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.851409 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" podStartSLOduration=2.851333185 podStartE2EDuration="2.851333185s" podCreationTimestamp="2025-12-09 14:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:19.840006333 +0000 UTC m=+262.765288590" watchObservedRunningTime="2025-12-09 14:16:19.851333185 +0000 UTC m=+262.776615432" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.855748 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.878246 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" podStartSLOduration=2.878227381 podStartE2EDuration="2.878227381s" podCreationTimestamp="2025-12-09 14:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:19.87240697 +0000 UTC m=+262.797689237" watchObservedRunningTime="2025-12-09 14:16:19.878227381 +0000 UTC m=+262.803509628" Dec 09 14:16:19 crc 
kubenswrapper[5173]: I1209 14:16:19.882834 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d43d7f6-269b-42cb-a5c5-ee55ebc08c58" path="/var/lib/kubelet/pods/7d43d7f6-269b-42cb-a5c5-ee55ebc08c58/volumes" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.883532 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b60b11a3-6068-4c59-bc81-8bc06ba89d0e" path="/var/lib/kubelet/pods/b60b11a3-6068-4c59-bc81-8bc06ba89d0e/volumes" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.905645 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.911666 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.913085 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:19 crc kubenswrapper[5173]: I1209 14:16:19.913549 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.196524 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.306535 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.359621 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.376964 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.491553 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.499987 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.515499 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.571074 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.572997 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.581468 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.594917 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.656281 5173 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.691615 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.747294 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.810421 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.830111 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.835409 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.880657 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 09 14:16:20 crc kubenswrapper[5173]: I1209 14:16:20.954265 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 09 14:16:21 crc kubenswrapper[5173]: I1209 14:16:21.217326 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 09 14:16:21 crc kubenswrapper[5173]: I1209 14:16:21.232336 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 09 14:16:21 crc kubenswrapper[5173]: I1209 14:16:21.360339 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 09 14:16:21 crc kubenswrapper[5173]: I1209 14:16:21.442492 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 09 14:16:21 crc kubenswrapper[5173]: I1209 14:16:21.898950 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 09 14:16:22 crc kubenswrapper[5173]: I1209 14:16:22.528054 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.660989 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.661319 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.694115 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.694166 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.694218 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.694233 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.694273 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.694285 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.694320 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.694378 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.694486 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.694809 5173 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.694832 5173 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.694842 5173 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.694855 5173 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.705043 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.796169 5173 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.853645 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.853700 5173 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="1504847409cadb7ba6aff1f485523aa7088fda4b0fb30ee6746a959898516f24" exitCode=137 Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.853803 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.853828 5173 scope.go:117] "RemoveContainer" containerID="1504847409cadb7ba6aff1f485523aa7088fda4b0fb30ee6746a959898516f24" Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.878436 5173 scope.go:117] "RemoveContainer" containerID="1504847409cadb7ba6aff1f485523aa7088fda4b0fb30ee6746a959898516f24" Dec 09 14:16:24 crc kubenswrapper[5173]: E1209 14:16:24.878898 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1504847409cadb7ba6aff1f485523aa7088fda4b0fb30ee6746a959898516f24\": container with ID starting with 1504847409cadb7ba6aff1f485523aa7088fda4b0fb30ee6746a959898516f24 not found: ID does not exist" containerID="1504847409cadb7ba6aff1f485523aa7088fda4b0fb30ee6746a959898516f24" Dec 09 14:16:24 crc kubenswrapper[5173]: I1209 14:16:24.878939 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1504847409cadb7ba6aff1f485523aa7088fda4b0fb30ee6746a959898516f24"} err="failed to get container status \"1504847409cadb7ba6aff1f485523aa7088fda4b0fb30ee6746a959898516f24\": rpc error: code = NotFound desc = could not find container \"1504847409cadb7ba6aff1f485523aa7088fda4b0fb30ee6746a959898516f24\": container with ID starting with 1504847409cadb7ba6aff1f485523aa7088fda4b0fb30ee6746a959898516f24 not found: ID does not exist" Dec 09 14:16:25 crc kubenswrapper[5173]: I1209 14:16:25.877545 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Dec 09 14:16:25 crc kubenswrapper[5173]: I1209 14:16:25.877800 5173 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 09 14:16:25 crc kubenswrapper[5173]: I1209 14:16:25.889055 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 14:16:25 crc kubenswrapper[5173]: I1209 14:16:25.889113 5173 kubelet.go:2759] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="9da1381d-14de-45b4-833f-9525359998bd" Dec 09 14:16:25 crc kubenswrapper[5173]: I1209 14:16:25.893873 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 14:16:25 crc kubenswrapper[5173]: I1209 14:16:25.893922 5173 kubelet.go:2784] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="9da1381d-14de-45b4-833f-9525359998bd" Dec 09 14:16:34 crc kubenswrapper[5173]: I1209 14:16:34.921566 5173 generic.go:358] "Generic (PLEG): container finished" podID="d171fe05-fe49-46fb-9407-bdc1f9272d4b" containerID="7cb94fa5b2a5703a851fd5a637be2e6a5fa7d03100264b641663fd570f9e1917" exitCode=0 Dec 09 14:16:34 crc kubenswrapper[5173]: I1209 14:16:34.921676 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g" event={"ID":"d171fe05-fe49-46fb-9407-bdc1f9272d4b","Type":"ContainerDied","Data":"7cb94fa5b2a5703a851fd5a637be2e6a5fa7d03100264b641663fd570f9e1917"} Dec 09 14:16:34 crc kubenswrapper[5173]: I1209 14:16:34.923604 5173 scope.go:117] "RemoveContainer" 
containerID="7cb94fa5b2a5703a851fd5a637be2e6a5fa7d03100264b641663fd570f9e1917" Dec 09 14:16:35 crc kubenswrapper[5173]: I1209 14:16:35.930790 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g" event={"ID":"d171fe05-fe49-46fb-9407-bdc1f9272d4b","Type":"ContainerStarted","Data":"d693b2da8bf37684ba475a73a111ed783ab5127d1358a69d9b3a571f49d75ace"} Dec 09 14:16:35 crc kubenswrapper[5173]: I1209 14:16:35.931472 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g" Dec 09 14:16:35 crc kubenswrapper[5173]: I1209 14:16:35.934317 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g" Dec 09 14:16:36 crc kubenswrapper[5173]: I1209 14:16:36.217849 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 09 14:16:37 crc kubenswrapper[5173]: I1209 14:16:37.505163 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 09 14:16:37 crc kubenswrapper[5173]: I1209 14:16:37.677260 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5"] Dec 09 14:16:37 crc kubenswrapper[5173]: I1209 14:16:37.677578 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" podUID="16c15feb-fabd-4063-bee9-cd3b28e64eb0" containerName="controller-manager" containerID="cri-o://c70ffacd74153682d72ddd9bb34fc07065ca7951b3f7d336ca2960b919ff1e9e" gracePeriod=30 Dec 09 14:16:37 crc kubenswrapper[5173]: I1209 14:16:37.693633 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r"] Dec 09 14:16:37 crc kubenswrapper[5173]: I1209 14:16:37.694016 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" podUID="81221f2e-9c60-4997-b56a-a109daae77ae" containerName="route-controller-manager" containerID="cri-o://6f1bd5a1eb0ba474b092694f8d4b27413341b68715edc08ec56d126bc7a2d835" gracePeriod=30 Dec 09 14:16:37 crc kubenswrapper[5173]: I1209 14:16:37.942905 5173 generic.go:358] "Generic (PLEG): container finished" podID="16c15feb-fabd-4063-bee9-cd3b28e64eb0" containerID="c70ffacd74153682d72ddd9bb34fc07065ca7951b3f7d336ca2960b919ff1e9e" exitCode=0 Dec 09 14:16:37 crc kubenswrapper[5173]: I1209 14:16:37.943017 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" event={"ID":"16c15feb-fabd-4063-bee9-cd3b28e64eb0","Type":"ContainerDied","Data":"c70ffacd74153682d72ddd9bb34fc07065ca7951b3f7d336ca2960b919ff1e9e"} Dec 09 14:16:37 crc kubenswrapper[5173]: I1209 14:16:37.944764 5173 generic.go:358] "Generic (PLEG): container finished" podID="81221f2e-9c60-4997-b56a-a109daae77ae" containerID="6f1bd5a1eb0ba474b092694f8d4b27413341b68715edc08ec56d126bc7a2d835" exitCode=0 Dec 09 14:16:37 crc kubenswrapper[5173]: I1209 14:16:37.944900 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" 
event={"ID":"81221f2e-9c60-4997-b56a-a109daae77ae","Type":"ContainerDied","Data":"6f1bd5a1eb0ba474b092694f8d4b27413341b68715edc08ec56d126bc7a2d835"} Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.188507 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.228345 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq"] Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.229009 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="81221f2e-9c60-4997-b56a-a109daae77ae" containerName="route-controller-manager" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.229027 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="81221f2e-9c60-4997-b56a-a109daae77ae" containerName="route-controller-manager" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.229046 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.229053 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.229209 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="81221f2e-9c60-4997-b56a-a109daae77ae" containerName="route-controller-manager" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.229228 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.234004 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.243956 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq"] Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.280241 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81221f2e-9c60-4997-b56a-a109daae77ae-serving-cert\") pod \"81221f2e-9c60-4997-b56a-a109daae77ae\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.280315 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81221f2e-9c60-4997-b56a-a109daae77ae-config\") pod \"81221f2e-9c60-4997-b56a-a109daae77ae\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.280346 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/81221f2e-9c60-4997-b56a-a109daae77ae-tmp\") pod \"81221f2e-9c60-4997-b56a-a109daae77ae\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.280510 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81221f2e-9c60-4997-b56a-a109daae77ae-client-ca\") pod \"81221f2e-9c60-4997-b56a-a109daae77ae\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.280571 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tf6n\" (UniqueName: \"kubernetes.io/projected/81221f2e-9c60-4997-b56a-a109daae77ae-kube-api-access-2tf6n\") pod \"81221f2e-9c60-4997-b56a-a109daae77ae\" (UID: \"81221f2e-9c60-4997-b56a-a109daae77ae\") " Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.280926 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81221f2e-9c60-4997-b56a-a109daae77ae-tmp" (OuterVolumeSpecName: "tmp") pod "81221f2e-9c60-4997-b56a-a109daae77ae" (UID: "81221f2e-9c60-4997-b56a-a109daae77ae"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.281086 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81221f2e-9c60-4997-b56a-a109daae77ae-client-ca" (OuterVolumeSpecName: "client-ca") pod "81221f2e-9c60-4997-b56a-a109daae77ae" (UID: "81221f2e-9c60-4997-b56a-a109daae77ae"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.281216 5173 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81221f2e-9c60-4997-b56a-a109daae77ae-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.281342 5173 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/81221f2e-9c60-4997-b56a-a109daae77ae-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.281218 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81221f2e-9c60-4997-b56a-a109daae77ae-config" (OuterVolumeSpecName: "config") pod "81221f2e-9c60-4997-b56a-a109daae77ae" (UID: "81221f2e-9c60-4997-b56a-a109daae77ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.286632 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81221f2e-9c60-4997-b56a-a109daae77ae-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "81221f2e-9c60-4997-b56a-a109daae77ae" (UID: "81221f2e-9c60-4997-b56a-a109daae77ae"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.292218 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81221f2e-9c60-4997-b56a-a109daae77ae-kube-api-access-2tf6n" (OuterVolumeSpecName: "kube-api-access-2tf6n") pod "81221f2e-9c60-4997-b56a-a109daae77ae" (UID: "81221f2e-9c60-4997-b56a-a109daae77ae"). InnerVolumeSpecName "kube-api-access-2tf6n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.326082 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.382110 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6da76edd-6684-464c-a830-62a0a1d0af89-client-ca\") pod \"route-controller-manager-56656b5cf5-cz8dq\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.382185 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p76wq\" (UniqueName: \"kubernetes.io/projected/6da76edd-6684-464c-a830-62a0a1d0af89-kube-api-access-p76wq\") pod \"route-controller-manager-56656b5cf5-cz8dq\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.382286 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6da76edd-6684-464c-a830-62a0a1d0af89-serving-cert\") pod \"route-controller-manager-56656b5cf5-cz8dq\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.382411 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6da76edd-6684-464c-a830-62a0a1d0af89-config\") pod \"route-controller-manager-56656b5cf5-cz8dq\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.382467 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6da76edd-6684-464c-a830-62a0a1d0af89-tmp\") pod \"route-controller-manager-56656b5cf5-cz8dq\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.382605 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81221f2e-9c60-4997-b56a-a109daae77ae-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.382625 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81221f2e-9c60-4997-b56a-a109daae77ae-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.382638 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2tf6n\" (UniqueName: \"kubernetes.io/projected/81221f2e-9c60-4997-b56a-a109daae77ae-kube-api-access-2tf6n\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.400705 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d9b56d489-bjwnb"] Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.401472 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="16c15feb-fabd-4063-bee9-cd3b28e64eb0" containerName="controller-manager" 
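
The kubenswrapper records above and below follow the k8s.io/klog/v2 structured-logging layout: a severity-prefixed header (I/W/E + MMDD, wall time, PID, source file:line), a quoted message, then key="value" pairs. A minimal Go sketch of how such records are produced — the package scaffolding, main function, and pods slice here are illustrative assumptions; only the message text and key names are copied from the surrounding records:

    // Sketch of klog/v2 structured logging, the logger the kubelet links against.
    // InfoS renders: I<MMDD> <hh:mm:ss.us> <pid> <file>:<line>] "msg" key="value"
    package main

    import (
        "flag"

        "k8s.io/klog/v2"
    )

    func main() {
        klog.InitFlags(nil) // registers -v, -logtostderr, etc. on flag.CommandLine
        flag.Parse()
        defer klog.Flush()

        // Hypothetical pod list, shaped like the pods=[...] values in this log.
        pods := []string{"openshift-controller-manager/controller-manager-5d9b56d489-bjwnb"}

        // Emits e.g.: I1209 14:16:38.400705 ... main.go:21] "SyncLoop ADD" source="api" pods=[...]
        klog.InfoS("SyncLoop ADD", "source", "api", "pods", pods)
        klog.InfoS("RemoveStaleState: containerMap: removing container",
            "podUID", "16c15feb-fabd-4063-bee9-cd3b28e64eb0",
            "containerName", "controller-manager")
    }

Because the message string is stable and the variable parts live in the key="value" pairs, records of one kind can be filtered with a plain grep on the quoted message (for example "SyncLoop (PLEG): event for pod") and the keys parsed afterwards.
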
Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.401494 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c15feb-fabd-4063-bee9-cd3b28e64eb0" containerName="controller-manager" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.401613 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="16c15feb-fabd-4063-bee9-cd3b28e64eb0" containerName="controller-manager" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.409603 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.420478 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b56d489-bjwnb"] Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.483273 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c15feb-fabd-4063-bee9-cd3b28e64eb0-serving-cert\") pod \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.483376 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16c15feb-fabd-4063-bee9-cd3b28e64eb0-tmp\") pod \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.483449 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c15feb-fabd-4063-bee9-cd3b28e64eb0-config\") pod \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.483488 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l65km\" (UniqueName: \"kubernetes.io/projected/16c15feb-fabd-4063-bee9-cd3b28e64eb0-kube-api-access-l65km\") pod \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.483519 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16c15feb-fabd-4063-bee9-cd3b28e64eb0-client-ca\") pod \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.483734 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c15feb-fabd-4063-bee9-cd3b28e64eb0-proxy-ca-bundles\") pod \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\" (UID: \"16c15feb-fabd-4063-bee9-cd3b28e64eb0\") " Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.484211 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6da76edd-6684-464c-a830-62a0a1d0af89-client-ca\") pod \"route-controller-manager-56656b5cf5-cz8dq\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.484369 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p76wq\" (UniqueName: 
\"kubernetes.io/projected/6da76edd-6684-464c-a830-62a0a1d0af89-kube-api-access-p76wq\") pod \"route-controller-manager-56656b5cf5-cz8dq\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.484451 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6da76edd-6684-464c-a830-62a0a1d0af89-serving-cert\") pod \"route-controller-manager-56656b5cf5-cz8dq\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.484501 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6da76edd-6684-464c-a830-62a0a1d0af89-config\") pod \"route-controller-manager-56656b5cf5-cz8dq\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.484539 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6da76edd-6684-464c-a830-62a0a1d0af89-tmp\") pod \"route-controller-manager-56656b5cf5-cz8dq\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.485072 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6da76edd-6684-464c-a830-62a0a1d0af89-tmp\") pod \"route-controller-manager-56656b5cf5-cz8dq\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.485285 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6da76edd-6684-464c-a830-62a0a1d0af89-client-ca\") pod \"route-controller-manager-56656b5cf5-cz8dq\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.485366 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16c15feb-fabd-4063-bee9-cd3b28e64eb0-tmp" (OuterVolumeSpecName: "tmp") pod "16c15feb-fabd-4063-bee9-cd3b28e64eb0" (UID: "16c15feb-fabd-4063-bee9-cd3b28e64eb0"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.485636 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c15feb-fabd-4063-bee9-cd3b28e64eb0-client-ca" (OuterVolumeSpecName: "client-ca") pod "16c15feb-fabd-4063-bee9-cd3b28e64eb0" (UID: "16c15feb-fabd-4063-bee9-cd3b28e64eb0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.485724 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c15feb-fabd-4063-bee9-cd3b28e64eb0-config" (OuterVolumeSpecName: "config") pod "16c15feb-fabd-4063-bee9-cd3b28e64eb0" (UID: "16c15feb-fabd-4063-bee9-cd3b28e64eb0"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.485652 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6da76edd-6684-464c-a830-62a0a1d0af89-config\") pod \"route-controller-manager-56656b5cf5-cz8dq\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.486823 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c15feb-fabd-4063-bee9-cd3b28e64eb0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "16c15feb-fabd-4063-bee9-cd3b28e64eb0" (UID: "16c15feb-fabd-4063-bee9-cd3b28e64eb0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.488985 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c15feb-fabd-4063-bee9-cd3b28e64eb0-kube-api-access-l65km" (OuterVolumeSpecName: "kube-api-access-l65km") pod "16c15feb-fabd-4063-bee9-cd3b28e64eb0" (UID: "16c15feb-fabd-4063-bee9-cd3b28e64eb0"). InnerVolumeSpecName "kube-api-access-l65km". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.489724 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6da76edd-6684-464c-a830-62a0a1d0af89-serving-cert\") pod \"route-controller-manager-56656b5cf5-cz8dq\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.492133 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c15feb-fabd-4063-bee9-cd3b28e64eb0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16c15feb-fabd-4063-bee9-cd3b28e64eb0" (UID: "16c15feb-fabd-4063-bee9-cd3b28e64eb0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.502888 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p76wq\" (UniqueName: \"kubernetes.io/projected/6da76edd-6684-464c-a830-62a0a1d0af89-kube-api-access-p76wq\") pod \"route-controller-manager-56656b5cf5-cz8dq\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.550913 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.585675 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6006933d-fb45-456d-a200-2031414b5271-proxy-ca-bundles\") pod \"controller-manager-5d9b56d489-bjwnb\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.585743 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6006933d-fb45-456d-a200-2031414b5271-client-ca\") pod \"controller-manager-5d9b56d489-bjwnb\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.585802 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6006933d-fb45-456d-a200-2031414b5271-serving-cert\") pod \"controller-manager-5d9b56d489-bjwnb\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.585818 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6006933d-fb45-456d-a200-2031414b5271-tmp\") pod \"controller-manager-5d9b56d489-bjwnb\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.585841 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6006933d-fb45-456d-a200-2031414b5271-config\") pod \"controller-manager-5d9b56d489-bjwnb\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.585870 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf5hw\" (UniqueName: \"kubernetes.io/projected/6006933d-fb45-456d-a200-2031414b5271-kube-api-access-qf5hw\") pod \"controller-manager-5d9b56d489-bjwnb\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.585912 5173 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/16c15feb-fabd-4063-bee9-cd3b28e64eb0-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.585929 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c15feb-fabd-4063-bee9-cd3b28e64eb0-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.585940 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l65km\" (UniqueName: \"kubernetes.io/projected/16c15feb-fabd-4063-bee9-cd3b28e64eb0-kube-api-access-l65km\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 
14:16:38.585951 5173 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16c15feb-fabd-4063-bee9-cd3b28e64eb0-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.585961 5173 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c15feb-fabd-4063-bee9-cd3b28e64eb0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.585970 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c15feb-fabd-4063-bee9-cd3b28e64eb0-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.686622 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qf5hw\" (UniqueName: \"kubernetes.io/projected/6006933d-fb45-456d-a200-2031414b5271-kube-api-access-qf5hw\") pod \"controller-manager-5d9b56d489-bjwnb\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.687311 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6006933d-fb45-456d-a200-2031414b5271-proxy-ca-bundles\") pod \"controller-manager-5d9b56d489-bjwnb\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.687396 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6006933d-fb45-456d-a200-2031414b5271-client-ca\") pod \"controller-manager-5d9b56d489-bjwnb\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.687471 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6006933d-fb45-456d-a200-2031414b5271-serving-cert\") pod \"controller-manager-5d9b56d489-bjwnb\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.687488 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6006933d-fb45-456d-a200-2031414b5271-tmp\") pod \"controller-manager-5d9b56d489-bjwnb\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.687512 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6006933d-fb45-456d-a200-2031414b5271-config\") pod \"controller-manager-5d9b56d489-bjwnb\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.688580 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6006933d-fb45-456d-a200-2031414b5271-tmp\") pod \"controller-manager-5d9b56d489-bjwnb\" (UID: 
\"6006933d-fb45-456d-a200-2031414b5271\") " pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.688626 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6006933d-fb45-456d-a200-2031414b5271-proxy-ca-bundles\") pod \"controller-manager-5d9b56d489-bjwnb\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.689386 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6006933d-fb45-456d-a200-2031414b5271-client-ca\") pod \"controller-manager-5d9b56d489-bjwnb\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.689398 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6006933d-fb45-456d-a200-2031414b5271-config\") pod \"controller-manager-5d9b56d489-bjwnb\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.692801 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6006933d-fb45-456d-a200-2031414b5271-serving-cert\") pod \"controller-manager-5d9b56d489-bjwnb\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.703975 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf5hw\" (UniqueName: \"kubernetes.io/projected/6006933d-fb45-456d-a200-2031414b5271-kube-api-access-qf5hw\") pod \"controller-manager-5d9b56d489-bjwnb\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.727648 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.938850 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq"] Dec 09 14:16:38 crc kubenswrapper[5173]: W1209 14:16:38.956858 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6da76edd_6684_464c_a830_62a0a1d0af89.slice/crio-b9fd43367ec1dfb183cb8bca5830f23c50be336bc860a2556f89006ae0874cdb WatchSource:0}: Error finding container b9fd43367ec1dfb183cb8bca5830f23c50be336bc860a2556f89006ae0874cdb: Status 404 returned error can't find the container with id b9fd43367ec1dfb183cb8bca5830f23c50be336bc860a2556f89006ae0874cdb Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.960067 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.961385 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r" event={"ID":"81221f2e-9c60-4997-b56a-a109daae77ae","Type":"ContainerDied","Data":"4e4b4232fee6c061ab6dbd5fd11701f53101e5fe670dc5d61b1f51415761f2f0"} Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.961430 5173 scope.go:117] "RemoveContainer" containerID="6f1bd5a1eb0ba474b092694f8d4b27413341b68715edc08ec56d126bc7a2d835" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.968062 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" event={"ID":"16c15feb-fabd-4063-bee9-cd3b28e64eb0","Type":"ContainerDied","Data":"59204cb2455de5641ab8365de88e45b322caab08ca2f5a8992b5d4c159487e3a"} Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.968347 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.993492 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r"] Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.999412 5173 scope.go:117] "RemoveContainer" containerID="c70ffacd74153682d72ddd9bb34fc07065ca7951b3f7d336ca2960b919ff1e9e" Dec 09 14:16:38 crc kubenswrapper[5173]: I1209 14:16:38.999539 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8b4r"] Dec 09 14:16:39 crc kubenswrapper[5173]: I1209 14:16:39.003481 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5"] Dec 09 14:16:39 crc kubenswrapper[5173]: I1209 14:16:39.006930 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-784fcdd8f8-p8zt5"] Dec 09 14:16:39 crc kubenswrapper[5173]: I1209 14:16:39.103314 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b56d489-bjwnb"] Dec 09 14:16:39 crc kubenswrapper[5173]: W1209 14:16:39.109346 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6006933d_fb45_456d_a200_2031414b5271.slice/crio-c84eaa7eaed16a94ddd20026170dc919cf508568dfc6803ce25c80846f61a2fd WatchSource:0}: Error finding container c84eaa7eaed16a94ddd20026170dc919cf508568dfc6803ce25c80846f61a2fd: Status 404 returned error can't find the container with id c84eaa7eaed16a94ddd20026170dc919cf508568dfc6803ce25c80846f61a2fd Dec 09 14:16:39 crc kubenswrapper[5173]: I1209 14:16:39.702064 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 09 14:16:39 crc kubenswrapper[5173]: I1209 14:16:39.879874 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16c15feb-fabd-4063-bee9-cd3b28e64eb0" path="/var/lib/kubelet/pods/16c15feb-fabd-4063-bee9-cd3b28e64eb0/volumes" Dec 09 14:16:39 crc kubenswrapper[5173]: I1209 14:16:39.880801 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81221f2e-9c60-4997-b56a-a109daae77ae" 
path="/var/lib/kubelet/pods/81221f2e-9c60-4997-b56a-a109daae77ae/volumes" Dec 09 14:16:39 crc kubenswrapper[5173]: I1209 14:16:39.947311 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 09 14:16:39 crc kubenswrapper[5173]: I1209 14:16:39.974625 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" event={"ID":"6da76edd-6684-464c-a830-62a0a1d0af89","Type":"ContainerStarted","Data":"070a5bf2670c0c50accb554d3bed16feb7f1dcf40d70810243a7b11974064f6b"} Dec 09 14:16:39 crc kubenswrapper[5173]: I1209 14:16:39.974880 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" event={"ID":"6da76edd-6684-464c-a830-62a0a1d0af89","Type":"ContainerStarted","Data":"b9fd43367ec1dfb183cb8bca5830f23c50be336bc860a2556f89006ae0874cdb"} Dec 09 14:16:39 crc kubenswrapper[5173]: I1209 14:16:39.974959 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:39 crc kubenswrapper[5173]: I1209 14:16:39.977080 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" event={"ID":"6006933d-fb45-456d-a200-2031414b5271","Type":"ContainerStarted","Data":"1f89db58c4a378dfa3c3f560201683f2eb84ae9242be78c7c12537a3b7edcd45"} Dec 09 14:16:39 crc kubenswrapper[5173]: I1209 14:16:39.977123 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:39 crc kubenswrapper[5173]: I1209 14:16:39.977139 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" event={"ID":"6006933d-fb45-456d-a200-2031414b5271","Type":"ContainerStarted","Data":"c84eaa7eaed16a94ddd20026170dc919cf508568dfc6803ce25c80846f61a2fd"} Dec 09 14:16:39 crc kubenswrapper[5173]: I1209 14:16:39.983740 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:16:39 crc kubenswrapper[5173]: I1209 14:16:39.988569 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" podStartSLOduration=2.9885551980000002 podStartE2EDuration="2.988555198s" podCreationTimestamp="2025-12-09 14:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:39.986489314 +0000 UTC m=+282.911771571" watchObservedRunningTime="2025-12-09 14:16:39.988555198 +0000 UTC m=+282.913837445" Dec 09 14:16:40 crc kubenswrapper[5173]: I1209 14:16:40.025299 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" podStartSLOduration=3.025279269 podStartE2EDuration="3.025279269s" podCreationTimestamp="2025-12-09 14:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:16:40.011025406 +0000 UTC m=+282.936307663" watchObservedRunningTime="2025-12-09 14:16:40.025279269 +0000 UTC m=+282.950561516" Dec 09 14:16:40 crc 
kubenswrapper[5173]: I1209 14:16:40.251786 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 09 14:16:40 crc kubenswrapper[5173]: I1209 14:16:40.398593 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:16:43 crc kubenswrapper[5173]: I1209 14:16:43.978892 5173 ???:1] "http: TLS handshake error from 192.168.126.11:46656: no serving certificate available for the kubelet" Dec 09 14:16:46 crc kubenswrapper[5173]: I1209 14:16:46.631878 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 09 14:16:48 crc kubenswrapper[5173]: I1209 14:16:48.302841 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 09 14:16:49 crc kubenswrapper[5173]: I1209 14:16:49.085375 5173 patch_prober.go:28] interesting pod/machine-config-daemon-pxfmg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:16:49 crc kubenswrapper[5173]: I1209 14:16:49.085453 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:16:49 crc kubenswrapper[5173]: I1209 14:16:49.085506 5173 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" Dec 09 14:16:49 crc kubenswrapper[5173]: I1209 14:16:49.085964 5173 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e585a8663ff5e2821ef163759a8486a08d59824ba49fa41e0d15200765ef763"} pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 14:16:49 crc kubenswrapper[5173]: I1209 14:16:49.086029 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" containerID="cri-o://7e585a8663ff5e2821ef163759a8486a08d59824ba49fa41e0d15200765ef763" gracePeriod=600 Dec 09 14:16:50 crc kubenswrapper[5173]: I1209 14:16:50.030041 5173 generic.go:358] "Generic (PLEG): container finished" podID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerID="7e585a8663ff5e2821ef163759a8486a08d59824ba49fa41e0d15200765ef763" exitCode=0 Dec 09 14:16:50 crc kubenswrapper[5173]: I1209 14:16:50.030128 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" event={"ID":"8a8dd347-8a1b-4551-a318-abe7c12df817","Type":"ContainerDied","Data":"7e585a8663ff5e2821ef163759a8486a08d59824ba49fa41e0d15200765ef763"} Dec 09 14:16:50 crc kubenswrapper[5173]: I1209 14:16:50.030575 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" event={"ID":"8a8dd347-8a1b-4551-a318-abe7c12df817","Type":"ContainerStarted","Data":"4ad6833f3ca6b5e4f4c17ba91f6d8096243861b6d149a86087b4c5cd6377d00d"} Dec 09 14:16:51 crc kubenswrapper[5173]: I1209 14:16:51.700821 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 09 14:16:52 crc kubenswrapper[5173]: I1209 14:16:52.105121 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:16:52 crc kubenswrapper[5173]: I1209 14:16:52.836615 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 09 14:16:55 crc kubenswrapper[5173]: I1209 14:16:55.500685 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 09 14:16:57 crc kubenswrapper[5173]: I1209 14:16:57.816312 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 09 14:16:58 crc kubenswrapper[5173]: I1209 14:16:58.335133 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 14:16:58 crc kubenswrapper[5173]: I1209 14:16:58.341224 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 14:16:59 crc kubenswrapper[5173]: I1209 14:16:59.432908 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 09 14:17:00 crc kubenswrapper[5173]: I1209 14:17:00.314069 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 09 14:17:01 crc kubenswrapper[5173]: I1209 14:17:01.524626 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 09 14:17:03 crc kubenswrapper[5173]: I1209 14:17:03.030896 5173 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 09 14:17:17 crc kubenswrapper[5173]: I1209 14:17:17.702244 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b56d489-bjwnb"] Dec 09 14:17:17 crc kubenswrapper[5173]: I1209 14:17:17.703107 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" podUID="6006933d-fb45-456d-a200-2031414b5271" containerName="controller-manager" containerID="cri-o://1f89db58c4a378dfa3c3f560201683f2eb84ae9242be78c7c12537a3b7edcd45" gracePeriod=30 Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.535588 5173 generic.go:358] "Generic (PLEG): container finished" podID="6006933d-fb45-456d-a200-2031414b5271" containerID="1f89db58c4a378dfa3c3f560201683f2eb84ae9242be78c7c12537a3b7edcd45" exitCode=0 Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.535697 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" 
event={"ID":"6006933d-fb45-456d-a200-2031414b5271","Type":"ContainerDied","Data":"1f89db58c4a378dfa3c3f560201683f2eb84ae9242be78c7c12537a3b7edcd45"} Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.756117 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.784463 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9"] Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.784966 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6006933d-fb45-456d-a200-2031414b5271" containerName="controller-manager" Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.784985 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="6006933d-fb45-456d-a200-2031414b5271" containerName="controller-manager" Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.785084 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="6006933d-fb45-456d-a200-2031414b5271" containerName="controller-manager" Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.902292 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6006933d-fb45-456d-a200-2031414b5271-proxy-ca-bundles\") pod \"6006933d-fb45-456d-a200-2031414b5271\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.902451 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf5hw\" (UniqueName: \"kubernetes.io/projected/6006933d-fb45-456d-a200-2031414b5271-kube-api-access-qf5hw\") pod \"6006933d-fb45-456d-a200-2031414b5271\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.902559 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6006933d-fb45-456d-a200-2031414b5271-client-ca\") pod \"6006933d-fb45-456d-a200-2031414b5271\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.902848 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6006933d-fb45-456d-a200-2031414b5271-tmp\") pod \"6006933d-fb45-456d-a200-2031414b5271\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.902893 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6006933d-fb45-456d-a200-2031414b5271-serving-cert\") pod \"6006933d-fb45-456d-a200-2031414b5271\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.902931 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6006933d-fb45-456d-a200-2031414b5271-config\") pod \"6006933d-fb45-456d-a200-2031414b5271\" (UID: \"6006933d-fb45-456d-a200-2031414b5271\") " Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.903097 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6006933d-fb45-456d-a200-2031414b5271-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod 
"6006933d-fb45-456d-a200-2031414b5271" (UID: "6006933d-fb45-456d-a200-2031414b5271"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.903519 5173 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6006933d-fb45-456d-a200-2031414b5271-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.903638 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6006933d-fb45-456d-a200-2031414b5271-tmp" (OuterVolumeSpecName: "tmp") pod "6006933d-fb45-456d-a200-2031414b5271" (UID: "6006933d-fb45-456d-a200-2031414b5271"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.903724 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6006933d-fb45-456d-a200-2031414b5271-client-ca" (OuterVolumeSpecName: "client-ca") pod "6006933d-fb45-456d-a200-2031414b5271" (UID: "6006933d-fb45-456d-a200-2031414b5271"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.903914 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6006933d-fb45-456d-a200-2031414b5271-config" (OuterVolumeSpecName: "config") pod "6006933d-fb45-456d-a200-2031414b5271" (UID: "6006933d-fb45-456d-a200-2031414b5271"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.909849 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6006933d-fb45-456d-a200-2031414b5271-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6006933d-fb45-456d-a200-2031414b5271" (UID: "6006933d-fb45-456d-a200-2031414b5271"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.911035 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6006933d-fb45-456d-a200-2031414b5271-kube-api-access-qf5hw" (OuterVolumeSpecName: "kube-api-access-qf5hw") pod "6006933d-fb45-456d-a200-2031414b5271" (UID: "6006933d-fb45-456d-a200-2031414b5271"). InnerVolumeSpecName "kube-api-access-qf5hw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.924204 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9"] Dec 09 14:17:18 crc kubenswrapper[5173]: I1209 14:17:18.924424 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.004572 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94f24951-5e13-419f-86c8-6f82ea6bdd01-proxy-ca-bundles\") pod \"controller-manager-784fcdd8f8-4cxq9\" (UID: \"94f24951-5e13-419f-86c8-6f82ea6bdd01\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.004638 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94f24951-5e13-419f-86c8-6f82ea6bdd01-client-ca\") pod \"controller-manager-784fcdd8f8-4cxq9\" (UID: \"94f24951-5e13-419f-86c8-6f82ea6bdd01\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.004669 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/94f24951-5e13-419f-86c8-6f82ea6bdd01-tmp\") pod \"controller-manager-784fcdd8f8-4cxq9\" (UID: \"94f24951-5e13-419f-86c8-6f82ea6bdd01\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.004851 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftlhq\" (UniqueName: \"kubernetes.io/projected/94f24951-5e13-419f-86c8-6f82ea6bdd01-kube-api-access-ftlhq\") pod \"controller-manager-784fcdd8f8-4cxq9\" (UID: \"94f24951-5e13-419f-86c8-6f82ea6bdd01\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.004927 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94f24951-5e13-419f-86c8-6f82ea6bdd01-serving-cert\") pod \"controller-manager-784fcdd8f8-4cxq9\" (UID: \"94f24951-5e13-419f-86c8-6f82ea6bdd01\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.004965 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94f24951-5e13-419f-86c8-6f82ea6bdd01-config\") pod \"controller-manager-784fcdd8f8-4cxq9\" (UID: \"94f24951-5e13-419f-86c8-6f82ea6bdd01\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.005081 5173 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6006933d-fb45-456d-a200-2031414b5271-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.005093 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6006933d-fb45-456d-a200-2031414b5271-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.005106 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6006933d-fb45-456d-a200-2031414b5271-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.005116 5173 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qf5hw\" (UniqueName: \"kubernetes.io/projected/6006933d-fb45-456d-a200-2031414b5271-kube-api-access-qf5hw\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.005124 5173 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6006933d-fb45-456d-a200-2031414b5271-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.106994 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ftlhq\" (UniqueName: \"kubernetes.io/projected/94f24951-5e13-419f-86c8-6f82ea6bdd01-kube-api-access-ftlhq\") pod \"controller-manager-784fcdd8f8-4cxq9\" (UID: \"94f24951-5e13-419f-86c8-6f82ea6bdd01\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.107076 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94f24951-5e13-419f-86c8-6f82ea6bdd01-serving-cert\") pod \"controller-manager-784fcdd8f8-4cxq9\" (UID: \"94f24951-5e13-419f-86c8-6f82ea6bdd01\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.107114 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94f24951-5e13-419f-86c8-6f82ea6bdd01-config\") pod \"controller-manager-784fcdd8f8-4cxq9\" (UID: \"94f24951-5e13-419f-86c8-6f82ea6bdd01\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.107187 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94f24951-5e13-419f-86c8-6f82ea6bdd01-proxy-ca-bundles\") pod \"controller-manager-784fcdd8f8-4cxq9\" (UID: \"94f24951-5e13-419f-86c8-6f82ea6bdd01\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.107216 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94f24951-5e13-419f-86c8-6f82ea6bdd01-client-ca\") pod \"controller-manager-784fcdd8f8-4cxq9\" (UID: \"94f24951-5e13-419f-86c8-6f82ea6bdd01\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.107244 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/94f24951-5e13-419f-86c8-6f82ea6bdd01-tmp\") pod \"controller-manager-784fcdd8f8-4cxq9\" (UID: \"94f24951-5e13-419f-86c8-6f82ea6bdd01\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.108023 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/94f24951-5e13-419f-86c8-6f82ea6bdd01-tmp\") pod \"controller-manager-784fcdd8f8-4cxq9\" (UID: \"94f24951-5e13-419f-86c8-6f82ea6bdd01\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.108578 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/94f24951-5e13-419f-86c8-6f82ea6bdd01-client-ca\") pod \"controller-manager-784fcdd8f8-4cxq9\" (UID: \"94f24951-5e13-419f-86c8-6f82ea6bdd01\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.108818 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94f24951-5e13-419f-86c8-6f82ea6bdd01-config\") pod \"controller-manager-784fcdd8f8-4cxq9\" (UID: \"94f24951-5e13-419f-86c8-6f82ea6bdd01\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.108878 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94f24951-5e13-419f-86c8-6f82ea6bdd01-proxy-ca-bundles\") pod \"controller-manager-784fcdd8f8-4cxq9\" (UID: \"94f24951-5e13-419f-86c8-6f82ea6bdd01\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.113985 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94f24951-5e13-419f-86c8-6f82ea6bdd01-serving-cert\") pod \"controller-manager-784fcdd8f8-4cxq9\" (UID: \"94f24951-5e13-419f-86c8-6f82ea6bdd01\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.127949 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftlhq\" (UniqueName: \"kubernetes.io/projected/94f24951-5e13-419f-86c8-6f82ea6bdd01-kube-api-access-ftlhq\") pod \"controller-manager-784fcdd8f8-4cxq9\" (UID: \"94f24951-5e13-419f-86c8-6f82ea6bdd01\") " pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.252680 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.462390 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9"] Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.469377 5173 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.543287 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" event={"ID":"94f24951-5e13-419f-86c8-6f82ea6bdd01","Type":"ContainerStarted","Data":"77e3ddb746984a54698a466d3477be2c51a8c61556030f4180bc403aee28e552"} Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.545467 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.545495 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d9b56d489-bjwnb" event={"ID":"6006933d-fb45-456d-a200-2031414b5271","Type":"ContainerDied","Data":"c84eaa7eaed16a94ddd20026170dc919cf508568dfc6803ce25c80846f61a2fd"} Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.545536 5173 scope.go:117] "RemoveContainer" containerID="1f89db58c4a378dfa3c3f560201683f2eb84ae9242be78c7c12537a3b7edcd45" Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.577334 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b56d489-bjwnb"] Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.581136 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b56d489-bjwnb"] Dec 09 14:17:19 crc kubenswrapper[5173]: I1209 14:17:19.877504 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6006933d-fb45-456d-a200-2031414b5271" path="/var/lib/kubelet/pods/6006933d-fb45-456d-a200-2031414b5271/volumes" Dec 09 14:17:20 crc kubenswrapper[5173]: I1209 14:17:20.555329 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" event={"ID":"94f24951-5e13-419f-86c8-6f82ea6bdd01","Type":"ContainerStarted","Data":"a940e35c190236ed83ff07df9147fadfd294901f7751cdadb8b7be959f27da2d"} Dec 09 14:17:20 crc kubenswrapper[5173]: I1209 14:17:20.555809 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:20 crc kubenswrapper[5173]: I1209 14:17:20.563373 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" Dec 09 14:17:20 crc kubenswrapper[5173]: I1209 14:17:20.571060 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-784fcdd8f8-4cxq9" podStartSLOduration=3.571044861 podStartE2EDuration="3.571044861s" podCreationTimestamp="2025-12-09 14:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:17:20.570666419 +0000 UTC m=+323.495948676" watchObservedRunningTime="2025-12-09 14:17:20.571044861 +0000 UTC m=+323.496327108" Dec 09 14:17:57 crc kubenswrapper[5173]: I1209 14:17:57.686944 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq"] Dec 09 14:17:57 crc kubenswrapper[5173]: I1209 14:17:57.687718 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" podUID="6da76edd-6684-464c-a830-62a0a1d0af89" containerName="route-controller-manager" containerID="cri-o://070a5bf2670c0c50accb554d3bed16feb7f1dcf40d70810243a7b11974064f6b" gracePeriod=30 Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.082675 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.118273 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8"] Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.119206 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6da76edd-6684-464c-a830-62a0a1d0af89" containerName="route-controller-manager" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.119233 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da76edd-6684-464c-a830-62a0a1d0af89" containerName="route-controller-manager" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.119344 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="6da76edd-6684-464c-a830-62a0a1d0af89" containerName="route-controller-manager" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.133317 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8"] Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.133638 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.208911 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6da76edd-6684-464c-a830-62a0a1d0af89-serving-cert\") pod \"6da76edd-6684-464c-a830-62a0a1d0af89\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.208974 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6da76edd-6684-464c-a830-62a0a1d0af89-config\") pod \"6da76edd-6684-464c-a830-62a0a1d0af89\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.209037 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6da76edd-6684-464c-a830-62a0a1d0af89-tmp\") pod \"6da76edd-6684-464c-a830-62a0a1d0af89\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.209107 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6da76edd-6684-464c-a830-62a0a1d0af89-client-ca\") pod \"6da76edd-6684-464c-a830-62a0a1d0af89\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.209154 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p76wq\" (UniqueName: \"kubernetes.io/projected/6da76edd-6684-464c-a830-62a0a1d0af89-kube-api-access-p76wq\") pod \"6da76edd-6684-464c-a830-62a0a1d0af89\" (UID: \"6da76edd-6684-464c-a830-62a0a1d0af89\") " Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.209869 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6da76edd-6684-464c-a830-62a0a1d0af89-tmp" (OuterVolumeSpecName: "tmp") pod "6da76edd-6684-464c-a830-62a0a1d0af89" (UID: "6da76edd-6684-464c-a830-62a0a1d0af89"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.210056 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6da76edd-6684-464c-a830-62a0a1d0af89-client-ca" (OuterVolumeSpecName: "client-ca") pod "6da76edd-6684-464c-a830-62a0a1d0af89" (UID: "6da76edd-6684-464c-a830-62a0a1d0af89"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.210226 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6da76edd-6684-464c-a830-62a0a1d0af89-config" (OuterVolumeSpecName: "config") pod "6da76edd-6684-464c-a830-62a0a1d0af89" (UID: "6da76edd-6684-464c-a830-62a0a1d0af89"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.214194 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da76edd-6684-464c-a830-62a0a1d0af89-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6da76edd-6684-464c-a830-62a0a1d0af89" (UID: "6da76edd-6684-464c-a830-62a0a1d0af89"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.214312 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da76edd-6684-464c-a830-62a0a1d0af89-kube-api-access-p76wq" (OuterVolumeSpecName: "kube-api-access-p76wq") pod "6da76edd-6684-464c-a830-62a0a1d0af89" (UID: "6da76edd-6684-464c-a830-62a0a1d0af89"). InnerVolumeSpecName "kube-api-access-p76wq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.309898 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fa51e870-567a-49ad-8fc5-031a12bf559c-tmp\") pod \"route-controller-manager-6db46bf7d7-q8kn8\" (UID: \"fa51e870-567a-49ad-8fc5-031a12bf559c\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.310166 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa51e870-567a-49ad-8fc5-031a12bf559c-serving-cert\") pod \"route-controller-manager-6db46bf7d7-q8kn8\" (UID: \"fa51e870-567a-49ad-8fc5-031a12bf559c\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.310260 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbjq2\" (UniqueName: \"kubernetes.io/projected/fa51e870-567a-49ad-8fc5-031a12bf559c-kube-api-access-rbjq2\") pod \"route-controller-manager-6db46bf7d7-q8kn8\" (UID: \"fa51e870-567a-49ad-8fc5-031a12bf559c\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.310446 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa51e870-567a-49ad-8fc5-031a12bf559c-client-ca\") pod \"route-controller-manager-6db46bf7d7-q8kn8\" (UID: \"fa51e870-567a-49ad-8fc5-031a12bf559c\") " 
pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.310559 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa51e870-567a-49ad-8fc5-031a12bf559c-config\") pod \"route-controller-manager-6db46bf7d7-q8kn8\" (UID: \"fa51e870-567a-49ad-8fc5-031a12bf559c\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.310661 5173 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6da76edd-6684-464c-a830-62a0a1d0af89-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.310726 5173 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6da76edd-6684-464c-a830-62a0a1d0af89-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.310790 5173 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6da76edd-6684-464c-a830-62a0a1d0af89-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.310854 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p76wq\" (UniqueName: \"kubernetes.io/projected/6da76edd-6684-464c-a830-62a0a1d0af89-kube-api-access-p76wq\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.310919 5173 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6da76edd-6684-464c-a830-62a0a1d0af89-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.412105 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa51e870-567a-49ad-8fc5-031a12bf559c-serving-cert\") pod \"route-controller-manager-6db46bf7d7-q8kn8\" (UID: \"fa51e870-567a-49ad-8fc5-031a12bf559c\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.412204 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rbjq2\" (UniqueName: \"kubernetes.io/projected/fa51e870-567a-49ad-8fc5-031a12bf559c-kube-api-access-rbjq2\") pod \"route-controller-manager-6db46bf7d7-q8kn8\" (UID: \"fa51e870-567a-49ad-8fc5-031a12bf559c\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.412321 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa51e870-567a-49ad-8fc5-031a12bf559c-client-ca\") pod \"route-controller-manager-6db46bf7d7-q8kn8\" (UID: \"fa51e870-567a-49ad-8fc5-031a12bf559c\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.412455 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa51e870-567a-49ad-8fc5-031a12bf559c-config\") pod \"route-controller-manager-6db46bf7d7-q8kn8\" (UID: \"fa51e870-567a-49ad-8fc5-031a12bf559c\") " 
pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.412539 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fa51e870-567a-49ad-8fc5-031a12bf559c-tmp\") pod \"route-controller-manager-6db46bf7d7-q8kn8\" (UID: \"fa51e870-567a-49ad-8fc5-031a12bf559c\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.413379 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa51e870-567a-49ad-8fc5-031a12bf559c-client-ca\") pod \"route-controller-manager-6db46bf7d7-q8kn8\" (UID: \"fa51e870-567a-49ad-8fc5-031a12bf559c\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.413947 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fa51e870-567a-49ad-8fc5-031a12bf559c-tmp\") pod \"route-controller-manager-6db46bf7d7-q8kn8\" (UID: \"fa51e870-567a-49ad-8fc5-031a12bf559c\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.414141 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa51e870-567a-49ad-8fc5-031a12bf559c-config\") pod \"route-controller-manager-6db46bf7d7-q8kn8\" (UID: \"fa51e870-567a-49ad-8fc5-031a12bf559c\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.417910 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa51e870-567a-49ad-8fc5-031a12bf559c-serving-cert\") pod \"route-controller-manager-6db46bf7d7-q8kn8\" (UID: \"fa51e870-567a-49ad-8fc5-031a12bf559c\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.431724 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbjq2\" (UniqueName: \"kubernetes.io/projected/fa51e870-567a-49ad-8fc5-031a12bf559c-kube-api-access-rbjq2\") pod \"route-controller-manager-6db46bf7d7-q8kn8\" (UID: \"fa51e870-567a-49ad-8fc5-031a12bf559c\") " pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.452529 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.773935 5173 generic.go:358] "Generic (PLEG): container finished" podID="6da76edd-6684-464c-a830-62a0a1d0af89" containerID="070a5bf2670c0c50accb554d3bed16feb7f1dcf40d70810243a7b11974064f6b" exitCode=0 Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.774005 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.774006 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" event={"ID":"6da76edd-6684-464c-a830-62a0a1d0af89","Type":"ContainerDied","Data":"070a5bf2670c0c50accb554d3bed16feb7f1dcf40d70810243a7b11974064f6b"} Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.774185 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq" event={"ID":"6da76edd-6684-464c-a830-62a0a1d0af89","Type":"ContainerDied","Data":"b9fd43367ec1dfb183cb8bca5830f23c50be336bc860a2556f89006ae0874cdb"} Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.774203 5173 scope.go:117] "RemoveContainer" containerID="070a5bf2670c0c50accb554d3bed16feb7f1dcf40d70810243a7b11974064f6b" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.801962 5173 scope.go:117] "RemoveContainer" containerID="070a5bf2670c0c50accb554d3bed16feb7f1dcf40d70810243a7b11974064f6b" Dec 09 14:17:58 crc kubenswrapper[5173]: E1209 14:17:58.802430 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"070a5bf2670c0c50accb554d3bed16feb7f1dcf40d70810243a7b11974064f6b\": container with ID starting with 070a5bf2670c0c50accb554d3bed16feb7f1dcf40d70810243a7b11974064f6b not found: ID does not exist" containerID="070a5bf2670c0c50accb554d3bed16feb7f1dcf40d70810243a7b11974064f6b" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.802460 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"070a5bf2670c0c50accb554d3bed16feb7f1dcf40d70810243a7b11974064f6b"} err="failed to get container status \"070a5bf2670c0c50accb554d3bed16feb7f1dcf40d70810243a7b11974064f6b\": rpc error: code = NotFound desc = could not find container \"070a5bf2670c0c50accb554d3bed16feb7f1dcf40d70810243a7b11974064f6b\": container with ID starting with 070a5bf2670c0c50accb554d3bed16feb7f1dcf40d70810243a7b11974064f6b not found: ID does not exist" Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.807098 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq"] Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.810574 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56656b5cf5-cz8dq"] Dec 09 14:17:58 crc kubenswrapper[5173]: I1209 14:17:58.889974 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8"] Dec 09 14:17:59 crc kubenswrapper[5173]: I1209 14:17:59.783119 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" event={"ID":"fa51e870-567a-49ad-8fc5-031a12bf559c","Type":"ContainerStarted","Data":"d4ac3e15d6e28a050253a4775ee05a1e6dcfc209e37f0623375a2650a3e11612"} Dec 09 14:17:59 crc kubenswrapper[5173]: I1209 14:17:59.783423 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" event={"ID":"fa51e870-567a-49ad-8fc5-031a12bf559c","Type":"ContainerStarted","Data":"6396d5112aaf81ee75b8833a9706903c2317c3c34f2ee8e08150b67b4bc668a1"} Dec 09 14:17:59 
crc kubenswrapper[5173]: I1209 14:17:59.783544 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:17:59 crc kubenswrapper[5173]: I1209 14:17:59.806044 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" podStartSLOduration=2.806019066 podStartE2EDuration="2.806019066s" podCreationTimestamp="2025-12-09 14:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:17:59.801435073 +0000 UTC m=+362.726717340" watchObservedRunningTime="2025-12-09 14:17:59.806019066 +0000 UTC m=+362.731301333" Dec 09 14:17:59 crc kubenswrapper[5173]: I1209 14:17:59.878150 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6da76edd-6684-464c-a830-62a0a1d0af89" path="/var/lib/kubelet/pods/6da76edd-6684-464c-a830-62a0a1d0af89/volumes" Dec 09 14:18:00 crc kubenswrapper[5173]: I1209 14:18:00.658212 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6db46bf7d7-q8kn8" Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.746204 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-95c8n"] Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.747129 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-95c8n" podUID="8536effa-529d-4962-ab4e-0d8e1c3c4d93" containerName="registry-server" containerID="cri-o://c2567827565dba07d67fb187c07fd4ca6d10f97f24e4b0a560ddb67ff6dd1480" gracePeriod=30 Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.758627 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mq8bj"] Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.759120 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mq8bj" podUID="a79afc8b-ca22-4e56-b7a9-d725b23e30ff" containerName="registry-server" containerID="cri-o://e70547d63b919901fa55435fada87003be15aa53a66d4781392f3192b1aa43fc" gracePeriod=30 Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.777652 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"] Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.778016 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g" podUID="d171fe05-fe49-46fb-9407-bdc1f9272d4b" containerName="marketplace-operator" containerID="cri-o://d693b2da8bf37684ba475a73a111ed783ab5127d1358a69d9b3a571f49d75ace" gracePeriod=30 Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.785278 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9cgv7"] Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.798508 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-72sct"] Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.798554 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xmw7h"] Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.798874 5173 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xmw7h" podUID="07be13ae-949a-42e1-9366-afe32b5480f2" containerName="registry-server" containerID="cri-o://253df09fb93bf05100a2fd1ca2c374cd41410156d05fa74816c80d97de0d3fe2" gracePeriod=30 Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.799061 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.799839 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-72sct" podUID="ae976069-cbe3-4195-8666-ec1e96e284e9" containerName="registry-server" containerID="cri-o://2f5d285072af1c1e7dc639151ecd13906fb57bfb974c0fc1de48798d8268cbce" gracePeriod=30 Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.811438 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9cgv7"] Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.871858 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45ab35d5-cdaf-43c4-abce-86d212e08388-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-9cgv7\" (UID: \"45ab35d5-cdaf-43c4-abce-86d212e08388\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.871902 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/45ab35d5-cdaf-43c4-abce-86d212e08388-tmp\") pod \"marketplace-operator-547dbd544d-9cgv7\" (UID: \"45ab35d5-cdaf-43c4-abce-86d212e08388\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.871922 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/45ab35d5-cdaf-43c4-abce-86d212e08388-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-9cgv7\" (UID: \"45ab35d5-cdaf-43c4-abce-86d212e08388\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.871977 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjv2l\" (UniqueName: \"kubernetes.io/projected/45ab35d5-cdaf-43c4-abce-86d212e08388-kube-api-access-sjv2l\") pod \"marketplace-operator-547dbd544d-9cgv7\" (UID: \"45ab35d5-cdaf-43c4-abce-86d212e08388\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.974895 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sjv2l\" (UniqueName: \"kubernetes.io/projected/45ab35d5-cdaf-43c4-abce-86d212e08388-kube-api-access-sjv2l\") pod \"marketplace-operator-547dbd544d-9cgv7\" (UID: \"45ab35d5-cdaf-43c4-abce-86d212e08388\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.974976 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45ab35d5-cdaf-43c4-abce-86d212e08388-marketplace-trusted-ca\") 
pod \"marketplace-operator-547dbd544d-9cgv7\" (UID: \"45ab35d5-cdaf-43c4-abce-86d212e08388\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.974996 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/45ab35d5-cdaf-43c4-abce-86d212e08388-tmp\") pod \"marketplace-operator-547dbd544d-9cgv7\" (UID: \"45ab35d5-cdaf-43c4-abce-86d212e08388\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.975017 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/45ab35d5-cdaf-43c4-abce-86d212e08388-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-9cgv7\" (UID: \"45ab35d5-cdaf-43c4-abce-86d212e08388\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.976222 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45ab35d5-cdaf-43c4-abce-86d212e08388-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-9cgv7\" (UID: \"45ab35d5-cdaf-43c4-abce-86d212e08388\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.976794 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/45ab35d5-cdaf-43c4-abce-86d212e08388-tmp\") pod \"marketplace-operator-547dbd544d-9cgv7\" (UID: \"45ab35d5-cdaf-43c4-abce-86d212e08388\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.985283 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/45ab35d5-cdaf-43c4-abce-86d212e08388-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-9cgv7\" (UID: \"45ab35d5-cdaf-43c4-abce-86d212e08388\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" Dec 09 14:18:09 crc kubenswrapper[5173]: I1209 14:18:09.995173 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjv2l\" (UniqueName: \"kubernetes.io/projected/45ab35d5-cdaf-43c4-abce-86d212e08388-kube-api-access-sjv2l\") pod \"marketplace-operator-547dbd544d-9cgv7\" (UID: \"45ab35d5-cdaf-43c4-abce-86d212e08388\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.186997 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.206049 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-95c8n" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.241540 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mq8bj" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.259461 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-72sct" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.280334 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8536effa-529d-4962-ab4e-0d8e1c3c4d93-catalog-content\") pod \"8536effa-529d-4962-ab4e-0d8e1c3c4d93\" (UID: \"8536effa-529d-4962-ab4e-0d8e1c3c4d93\") " Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.280404 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn2kq\" (UniqueName: \"kubernetes.io/projected/8536effa-529d-4962-ab4e-0d8e1c3c4d93-kube-api-access-hn2kq\") pod \"8536effa-529d-4962-ab4e-0d8e1c3c4d93\" (UID: \"8536effa-529d-4962-ab4e-0d8e1c3c4d93\") " Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.280472 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8536effa-529d-4962-ab4e-0d8e1c3c4d93-utilities\") pod \"8536effa-529d-4962-ab4e-0d8e1c3c4d93\" (UID: \"8536effa-529d-4962-ab4e-0d8e1c3c4d93\") " Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.282340 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8536effa-529d-4962-ab4e-0d8e1c3c4d93-utilities" (OuterVolumeSpecName: "utilities") pod "8536effa-529d-4962-ab4e-0d8e1c3c4d93" (UID: "8536effa-529d-4962-ab4e-0d8e1c3c4d93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.285923 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8536effa-529d-4962-ab4e-0d8e1c3c4d93-kube-api-access-hn2kq" (OuterVolumeSpecName: "kube-api-access-hn2kq") pod "8536effa-529d-4962-ab4e-0d8e1c3c4d93" (UID: "8536effa-529d-4962-ab4e-0d8e1c3c4d93"). InnerVolumeSpecName "kube-api-access-hn2kq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.294695 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.317420 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xmw7h" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.342078 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8536effa-529d-4962-ab4e-0d8e1c3c4d93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8536effa-529d-4962-ab4e-0d8e1c3c4d93" (UID: "8536effa-529d-4962-ab4e-0d8e1c3c4d93"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.381173 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae976069-cbe3-4195-8666-ec1e96e284e9-catalog-content\") pod \"ae976069-cbe3-4195-8666-ec1e96e284e9\" (UID: \"ae976069-cbe3-4195-8666-ec1e96e284e9\") " Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.381226 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d171fe05-fe49-46fb-9407-bdc1f9272d4b-marketplace-trusted-ca\") pod \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\" (UID: \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\") " Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.381274 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae976069-cbe3-4195-8666-ec1e96e284e9-utilities\") pod \"ae976069-cbe3-4195-8666-ec1e96e284e9\" (UID: \"ae976069-cbe3-4195-8666-ec1e96e284e9\") " Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.381307 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkj79\" (UniqueName: \"kubernetes.io/projected/07be13ae-949a-42e1-9366-afe32b5480f2-kube-api-access-vkj79\") pod \"07be13ae-949a-42e1-9366-afe32b5480f2\" (UID: \"07be13ae-949a-42e1-9366-afe32b5480f2\") " Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.381327 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a79afc8b-ca22-4e56-b7a9-d725b23e30ff-utilities\") pod \"a79afc8b-ca22-4e56-b7a9-d725b23e30ff\" (UID: \"a79afc8b-ca22-4e56-b7a9-d725b23e30ff\") " Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.381360 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a79afc8b-ca22-4e56-b7a9-d725b23e30ff-catalog-content\") pod \"a79afc8b-ca22-4e56-b7a9-d725b23e30ff\" (UID: \"a79afc8b-ca22-4e56-b7a9-d725b23e30ff\") " Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.381391 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf76p\" (UniqueName: \"kubernetes.io/projected/d171fe05-fe49-46fb-9407-bdc1f9272d4b-kube-api-access-zf76p\") pod \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\" (UID: \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\") " Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.381430 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5qms\" (UniqueName: \"kubernetes.io/projected/a79afc8b-ca22-4e56-b7a9-d725b23e30ff-kube-api-access-f5qms\") pod \"a79afc8b-ca22-4e56-b7a9-d725b23e30ff\" (UID: \"a79afc8b-ca22-4e56-b7a9-d725b23e30ff\") " Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.381451 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d171fe05-fe49-46fb-9407-bdc1f9272d4b-tmp\") pod \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\" (UID: \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\") " Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.381476 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d171fe05-fe49-46fb-9407-bdc1f9272d4b-marketplace-operator-metrics\") pod 
\"d171fe05-fe49-46fb-9407-bdc1f9272d4b\" (UID: \"d171fe05-fe49-46fb-9407-bdc1f9272d4b\") " Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.381519 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2jlf\" (UniqueName: \"kubernetes.io/projected/ae976069-cbe3-4195-8666-ec1e96e284e9-kube-api-access-h2jlf\") pod \"ae976069-cbe3-4195-8666-ec1e96e284e9\" (UID: \"ae976069-cbe3-4195-8666-ec1e96e284e9\") " Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.381555 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07be13ae-949a-42e1-9366-afe32b5480f2-utilities\") pod \"07be13ae-949a-42e1-9366-afe32b5480f2\" (UID: \"07be13ae-949a-42e1-9366-afe32b5480f2\") " Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.381604 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07be13ae-949a-42e1-9366-afe32b5480f2-catalog-content\") pod \"07be13ae-949a-42e1-9366-afe32b5480f2\" (UID: \"07be13ae-949a-42e1-9366-afe32b5480f2\") " Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.381801 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8536effa-529d-4962-ab4e-0d8e1c3c4d93-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.381819 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hn2kq\" (UniqueName: \"kubernetes.io/projected/8536effa-529d-4962-ab4e-0d8e1c3c4d93-kube-api-access-hn2kq\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.381829 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8536effa-529d-4962-ab4e-0d8e1c3c4d93-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.382563 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d171fe05-fe49-46fb-9407-bdc1f9272d4b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d171fe05-fe49-46fb-9407-bdc1f9272d4b" (UID: "d171fe05-fe49-46fb-9407-bdc1f9272d4b"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.383482 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae976069-cbe3-4195-8666-ec1e96e284e9-utilities" (OuterVolumeSpecName: "utilities") pod "ae976069-cbe3-4195-8666-ec1e96e284e9" (UID: "ae976069-cbe3-4195-8666-ec1e96e284e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.385143 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a79afc8b-ca22-4e56-b7a9-d725b23e30ff-utilities" (OuterVolumeSpecName: "utilities") pod "a79afc8b-ca22-4e56-b7a9-d725b23e30ff" (UID: "a79afc8b-ca22-4e56-b7a9-d725b23e30ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.385808 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d171fe05-fe49-46fb-9407-bdc1f9272d4b-tmp" (OuterVolumeSpecName: "tmp") pod "d171fe05-fe49-46fb-9407-bdc1f9272d4b" (UID: "d171fe05-fe49-46fb-9407-bdc1f9272d4b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.386239 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07be13ae-949a-42e1-9366-afe32b5480f2-utilities" (OuterVolumeSpecName: "utilities") pod "07be13ae-949a-42e1-9366-afe32b5480f2" (UID: "07be13ae-949a-42e1-9366-afe32b5480f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.387181 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a79afc8b-ca22-4e56-b7a9-d725b23e30ff-kube-api-access-f5qms" (OuterVolumeSpecName: "kube-api-access-f5qms") pod "a79afc8b-ca22-4e56-b7a9-d725b23e30ff" (UID: "a79afc8b-ca22-4e56-b7a9-d725b23e30ff"). InnerVolumeSpecName "kube-api-access-f5qms". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.387534 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d171fe05-fe49-46fb-9407-bdc1f9272d4b-kube-api-access-zf76p" (OuterVolumeSpecName: "kube-api-access-zf76p") pod "d171fe05-fe49-46fb-9407-bdc1f9272d4b" (UID: "d171fe05-fe49-46fb-9407-bdc1f9272d4b"). InnerVolumeSpecName "kube-api-access-zf76p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.387718 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07be13ae-949a-42e1-9366-afe32b5480f2-kube-api-access-vkj79" (OuterVolumeSpecName: "kube-api-access-vkj79") pod "07be13ae-949a-42e1-9366-afe32b5480f2" (UID: "07be13ae-949a-42e1-9366-afe32b5480f2"). InnerVolumeSpecName "kube-api-access-vkj79". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.388185 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d171fe05-fe49-46fb-9407-bdc1f9272d4b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d171fe05-fe49-46fb-9407-bdc1f9272d4b" (UID: "d171fe05-fe49-46fb-9407-bdc1f9272d4b"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.388746 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae976069-cbe3-4195-8666-ec1e96e284e9-kube-api-access-h2jlf" (OuterVolumeSpecName: "kube-api-access-h2jlf") pod "ae976069-cbe3-4195-8666-ec1e96e284e9" (UID: "ae976069-cbe3-4195-8666-ec1e96e284e9"). InnerVolumeSpecName "kube-api-access-h2jlf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.401244 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae976069-cbe3-4195-8666-ec1e96e284e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae976069-cbe3-4195-8666-ec1e96e284e9" (UID: "ae976069-cbe3-4195-8666-ec1e96e284e9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.437815 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a79afc8b-ca22-4e56-b7a9-d725b23e30ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a79afc8b-ca22-4e56-b7a9-d725b23e30ff" (UID: "a79afc8b-ca22-4e56-b7a9-d725b23e30ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.482965 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a79afc8b-ca22-4e56-b7a9-d725b23e30ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.482978 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07be13ae-949a-42e1-9366-afe32b5480f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07be13ae-949a-42e1-9366-afe32b5480f2" (UID: "07be13ae-949a-42e1-9366-afe32b5480f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.483006 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a79afc8b-ca22-4e56-b7a9-d725b23e30ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.483054 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zf76p\" (UniqueName: \"kubernetes.io/projected/d171fe05-fe49-46fb-9407-bdc1f9272d4b-kube-api-access-zf76p\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.483069 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f5qms\" (UniqueName: \"kubernetes.io/projected/a79afc8b-ca22-4e56-b7a9-d725b23e30ff-kube-api-access-f5qms\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.483080 5173 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d171fe05-fe49-46fb-9407-bdc1f9272d4b-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.483090 5173 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d171fe05-fe49-46fb-9407-bdc1f9272d4b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.483099 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h2jlf\" (UniqueName: \"kubernetes.io/projected/ae976069-cbe3-4195-8666-ec1e96e284e9-kube-api-access-h2jlf\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.483110 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07be13ae-949a-42e1-9366-afe32b5480f2-utilities\") on node \"crc\" 
DevicePath \"\"" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.483120 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae976069-cbe3-4195-8666-ec1e96e284e9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.483129 5173 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d171fe05-fe49-46fb-9407-bdc1f9272d4b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.483140 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae976069-cbe3-4195-8666-ec1e96e284e9-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.483153 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vkj79\" (UniqueName: \"kubernetes.io/projected/07be13ae-949a-42e1-9366-afe32b5480f2-kube-api-access-vkj79\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.583993 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07be13ae-949a-42e1-9366-afe32b5480f2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.631336 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9cgv7"] Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.859621 5173 generic.go:358] "Generic (PLEG): container finished" podID="d171fe05-fe49-46fb-9407-bdc1f9272d4b" containerID="d693b2da8bf37684ba475a73a111ed783ab5127d1358a69d9b3a571f49d75ace" exitCode=0 Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.859688 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g" event={"ID":"d171fe05-fe49-46fb-9407-bdc1f9272d4b","Type":"ContainerDied","Data":"d693b2da8bf37684ba475a73a111ed783ab5127d1358a69d9b3a571f49d75ace"} Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.859744 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.860058 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z9d5g" event={"ID":"d171fe05-fe49-46fb-9407-bdc1f9272d4b","Type":"ContainerDied","Data":"cf78088fcbf995dd440b5a38e6c7b70e40c43df97d91d40a47449c204bb78e3c"} Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.860102 5173 scope.go:117] "RemoveContainer" containerID="d693b2da8bf37684ba475a73a111ed783ab5127d1358a69d9b3a571f49d75ace" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.862208 5173 generic.go:358] "Generic (PLEG): container finished" podID="07be13ae-949a-42e1-9366-afe32b5480f2" containerID="253df09fb93bf05100a2fd1ca2c374cd41410156d05fa74816c80d97de0d3fe2" exitCode=0 Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.862249 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmw7h" event={"ID":"07be13ae-949a-42e1-9366-afe32b5480f2","Type":"ContainerDied","Data":"253df09fb93bf05100a2fd1ca2c374cd41410156d05fa74816c80d97de0d3fe2"} Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.862266 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xmw7h" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.862283 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmw7h" event={"ID":"07be13ae-949a-42e1-9366-afe32b5480f2","Type":"ContainerDied","Data":"d05f78a8a403b8fecb379321c793c8aae2c2808b5e58de2aec66be001f4bc56c"} Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.877141 5173 generic.go:358] "Generic (PLEG): container finished" podID="8536effa-529d-4962-ab4e-0d8e1c3c4d93" containerID="c2567827565dba07d67fb187c07fd4ca6d10f97f24e4b0a560ddb67ff6dd1480" exitCode=0 Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.877294 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95c8n" event={"ID":"8536effa-529d-4962-ab4e-0d8e1c3c4d93","Type":"ContainerDied","Data":"c2567827565dba07d67fb187c07fd4ca6d10f97f24e4b0a560ddb67ff6dd1480"} Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.877324 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95c8n" event={"ID":"8536effa-529d-4962-ab4e-0d8e1c3c4d93","Type":"ContainerDied","Data":"662057c62b9ba536d7fba8c8abf4c9c4e5454fd522e6921852b1b30e6c9a6c38"} Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.877470 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-95c8n" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.884660 5173 scope.go:117] "RemoveContainer" containerID="7cb94fa5b2a5703a851fd5a637be2e6a5fa7d03100264b641663fd570f9e1917" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.892958 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" event={"ID":"45ab35d5-cdaf-43c4-abce-86d212e08388","Type":"ContainerStarted","Data":"1ea111637e3938769fdec3fa60c1c059bc75c73fa0f48607fdf1dd9c08bee232"} Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.893065 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" event={"ID":"45ab35d5-cdaf-43c4-abce-86d212e08388","Type":"ContainerStarted","Data":"6fc1b517326bfcc2acac0d777bdfa749b29ebf7641a062e63bee1913c06b3b6d"} Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.893534 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.894420 5173 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-9cgv7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.70:8080/healthz\": dial tcp 10.217.0.70:8080: connect: connection refused" start-of-body= Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.894539 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" podUID="45ab35d5-cdaf-43c4-abce-86d212e08388" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.70:8080/healthz\": dial tcp 10.217.0.70:8080: connect: connection refused" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.901142 5173 generic.go:358] "Generic (PLEG): container finished" podID="a79afc8b-ca22-4e56-b7a9-d725b23e30ff" containerID="e70547d63b919901fa55435fada87003be15aa53a66d4781392f3192b1aa43fc" exitCode=0 Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.901342 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq8bj" event={"ID":"a79afc8b-ca22-4e56-b7a9-d725b23e30ff","Type":"ContainerDied","Data":"e70547d63b919901fa55435fada87003be15aa53a66d4781392f3192b1aa43fc"} Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.901380 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mq8bj" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.901392 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mq8bj" event={"ID":"a79afc8b-ca22-4e56-b7a9-d725b23e30ff","Type":"ContainerDied","Data":"0a65410c30d86ba57bfb9bcc892dc6be200e0ff08e6ad8838cc87e62dbd1048e"} Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.903407 5173 generic.go:358] "Generic (PLEG): container finished" podID="ae976069-cbe3-4195-8666-ec1e96e284e9" containerID="2f5d285072af1c1e7dc639151ecd13906fb57bfb974c0fc1de48798d8268cbce" exitCode=0 Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.903449 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-72sct" event={"ID":"ae976069-cbe3-4195-8666-ec1e96e284e9","Type":"ContainerDied","Data":"2f5d285072af1c1e7dc639151ecd13906fb57bfb974c0fc1de48798d8268cbce"} Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.903468 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-72sct" event={"ID":"ae976069-cbe3-4195-8666-ec1e96e284e9","Type":"ContainerDied","Data":"9bd79166387f38e1a18f1caeb0a42af2660a76a4aa4b0d358364631c9fa57b64"} Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.903581 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-72sct" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.916344 5173 scope.go:117] "RemoveContainer" containerID="d693b2da8bf37684ba475a73a111ed783ab5127d1358a69d9b3a571f49d75ace" Dec 09 14:18:10 crc kubenswrapper[5173]: E1209 14:18:10.916729 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d693b2da8bf37684ba475a73a111ed783ab5127d1358a69d9b3a571f49d75ace\": container with ID starting with d693b2da8bf37684ba475a73a111ed783ab5127d1358a69d9b3a571f49d75ace not found: ID does not exist" containerID="d693b2da8bf37684ba475a73a111ed783ab5127d1358a69d9b3a571f49d75ace" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.916767 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d693b2da8bf37684ba475a73a111ed783ab5127d1358a69d9b3a571f49d75ace"} err="failed to get container status \"d693b2da8bf37684ba475a73a111ed783ab5127d1358a69d9b3a571f49d75ace\": rpc error: code = NotFound desc = could not find container \"d693b2da8bf37684ba475a73a111ed783ab5127d1358a69d9b3a571f49d75ace\": container with ID starting with d693b2da8bf37684ba475a73a111ed783ab5127d1358a69d9b3a571f49d75ace not found: ID does not exist" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.916790 5173 scope.go:117] "RemoveContainer" containerID="7cb94fa5b2a5703a851fd5a637be2e6a5fa7d03100264b641663fd570f9e1917" Dec 09 14:18:10 crc kubenswrapper[5173]: E1209 14:18:10.917085 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cb94fa5b2a5703a851fd5a637be2e6a5fa7d03100264b641663fd570f9e1917\": container with ID starting with 7cb94fa5b2a5703a851fd5a637be2e6a5fa7d03100264b641663fd570f9e1917 not found: ID does not exist" containerID="7cb94fa5b2a5703a851fd5a637be2e6a5fa7d03100264b641663fd570f9e1917" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.917115 5173 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7cb94fa5b2a5703a851fd5a637be2e6a5fa7d03100264b641663fd570f9e1917"} err="failed to get container status \"7cb94fa5b2a5703a851fd5a637be2e6a5fa7d03100264b641663fd570f9e1917\": rpc error: code = NotFound desc = could not find container \"7cb94fa5b2a5703a851fd5a637be2e6a5fa7d03100264b641663fd570f9e1917\": container with ID starting with 7cb94fa5b2a5703a851fd5a637be2e6a5fa7d03100264b641663fd570f9e1917 not found: ID does not exist" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.917132 5173 scope.go:117] "RemoveContainer" containerID="253df09fb93bf05100a2fd1ca2c374cd41410156d05fa74816c80d97de0d3fe2" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.933626 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" podStartSLOduration=1.933606406 podStartE2EDuration="1.933606406s" podCreationTimestamp="2025-12-09 14:18:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:18:10.92795481 +0000 UTC m=+373.853237077" watchObservedRunningTime="2025-12-09 14:18:10.933606406 +0000 UTC m=+373.858888653" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.943459 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xmw7h"] Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.948386 5173 scope.go:117] "RemoveContainer" containerID="a08d9f6480d56871633c0866a5e863c615ebfff2b48646189773d679a33bf2db" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.948547 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xmw7h"] Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.965123 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-95c8n"] Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.970860 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-95c8n"] Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.977633 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"] Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.981198 5173 scope.go:117] "RemoveContainer" containerID="dbb1d27d1272e2afa0f8d1141ddfba0c03885cc494cd66a4a349fd1299db39a1" Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.981584 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z9d5g"] Dec 09 14:18:10 crc kubenswrapper[5173]: I1209 14:18:10.998344 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mq8bj"] Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.005066 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mq8bj"] Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.009219 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-72sct"] Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.012717 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-72sct"] Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.014817 5173 scope.go:117] "RemoveContainer" containerID="253df09fb93bf05100a2fd1ca2c374cd41410156d05fa74816c80d97de0d3fe2" Dec 09 14:18:11 crc kubenswrapper[5173]: 
E1209 14:18:11.015281 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"253df09fb93bf05100a2fd1ca2c374cd41410156d05fa74816c80d97de0d3fe2\": container with ID starting with 253df09fb93bf05100a2fd1ca2c374cd41410156d05fa74816c80d97de0d3fe2 not found: ID does not exist" containerID="253df09fb93bf05100a2fd1ca2c374cd41410156d05fa74816c80d97de0d3fe2" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.015416 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"253df09fb93bf05100a2fd1ca2c374cd41410156d05fa74816c80d97de0d3fe2"} err="failed to get container status \"253df09fb93bf05100a2fd1ca2c374cd41410156d05fa74816c80d97de0d3fe2\": rpc error: code = NotFound desc = could not find container \"253df09fb93bf05100a2fd1ca2c374cd41410156d05fa74816c80d97de0d3fe2\": container with ID starting with 253df09fb93bf05100a2fd1ca2c374cd41410156d05fa74816c80d97de0d3fe2 not found: ID does not exist" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.015538 5173 scope.go:117] "RemoveContainer" containerID="a08d9f6480d56871633c0866a5e863c615ebfff2b48646189773d679a33bf2db" Dec 09 14:18:11 crc kubenswrapper[5173]: E1209 14:18:11.015914 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a08d9f6480d56871633c0866a5e863c615ebfff2b48646189773d679a33bf2db\": container with ID starting with a08d9f6480d56871633c0866a5e863c615ebfff2b48646189773d679a33bf2db not found: ID does not exist" containerID="a08d9f6480d56871633c0866a5e863c615ebfff2b48646189773d679a33bf2db" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.015938 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08d9f6480d56871633c0866a5e863c615ebfff2b48646189773d679a33bf2db"} err="failed to get container status \"a08d9f6480d56871633c0866a5e863c615ebfff2b48646189773d679a33bf2db\": rpc error: code = NotFound desc = could not find container \"a08d9f6480d56871633c0866a5e863c615ebfff2b48646189773d679a33bf2db\": container with ID starting with a08d9f6480d56871633c0866a5e863c615ebfff2b48646189773d679a33bf2db not found: ID does not exist" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.015952 5173 scope.go:117] "RemoveContainer" containerID="dbb1d27d1272e2afa0f8d1141ddfba0c03885cc494cd66a4a349fd1299db39a1" Dec 09 14:18:11 crc kubenswrapper[5173]: E1209 14:18:11.016158 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbb1d27d1272e2afa0f8d1141ddfba0c03885cc494cd66a4a349fd1299db39a1\": container with ID starting with dbb1d27d1272e2afa0f8d1141ddfba0c03885cc494cd66a4a349fd1299db39a1 not found: ID does not exist" containerID="dbb1d27d1272e2afa0f8d1141ddfba0c03885cc494cd66a4a349fd1299db39a1" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.016177 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbb1d27d1272e2afa0f8d1141ddfba0c03885cc494cd66a4a349fd1299db39a1"} err="failed to get container status \"dbb1d27d1272e2afa0f8d1141ddfba0c03885cc494cd66a4a349fd1299db39a1\": rpc error: code = NotFound desc = could not find container \"dbb1d27d1272e2afa0f8d1141ddfba0c03885cc494cd66a4a349fd1299db39a1\": container with ID starting with dbb1d27d1272e2afa0f8d1141ddfba0c03885cc494cd66a4a349fd1299db39a1 not found: ID does not exist" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.016188 
5173 scope.go:117] "RemoveContainer" containerID="c2567827565dba07d67fb187c07fd4ca6d10f97f24e4b0a560ddb67ff6dd1480" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.028899 5173 scope.go:117] "RemoveContainer" containerID="1a32b04719295f2b2643e8e0f85842fd69b29ea69ab25859057863fd6f2731a4" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.042465 5173 scope.go:117] "RemoveContainer" containerID="e753700835c5c0c431be571b87dc03786d70f6041f01d18a5b46bddd2fc8d2d6" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.060265 5173 scope.go:117] "RemoveContainer" containerID="c2567827565dba07d67fb187c07fd4ca6d10f97f24e4b0a560ddb67ff6dd1480" Dec 09 14:18:11 crc kubenswrapper[5173]: E1209 14:18:11.060799 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2567827565dba07d67fb187c07fd4ca6d10f97f24e4b0a560ddb67ff6dd1480\": container with ID starting with c2567827565dba07d67fb187c07fd4ca6d10f97f24e4b0a560ddb67ff6dd1480 not found: ID does not exist" containerID="c2567827565dba07d67fb187c07fd4ca6d10f97f24e4b0a560ddb67ff6dd1480" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.060849 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2567827565dba07d67fb187c07fd4ca6d10f97f24e4b0a560ddb67ff6dd1480"} err="failed to get container status \"c2567827565dba07d67fb187c07fd4ca6d10f97f24e4b0a560ddb67ff6dd1480\": rpc error: code = NotFound desc = could not find container \"c2567827565dba07d67fb187c07fd4ca6d10f97f24e4b0a560ddb67ff6dd1480\": container with ID starting with c2567827565dba07d67fb187c07fd4ca6d10f97f24e4b0a560ddb67ff6dd1480 not found: ID does not exist" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.060881 5173 scope.go:117] "RemoveContainer" containerID="1a32b04719295f2b2643e8e0f85842fd69b29ea69ab25859057863fd6f2731a4" Dec 09 14:18:11 crc kubenswrapper[5173]: E1209 14:18:11.061260 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a32b04719295f2b2643e8e0f85842fd69b29ea69ab25859057863fd6f2731a4\": container with ID starting with 1a32b04719295f2b2643e8e0f85842fd69b29ea69ab25859057863fd6f2731a4 not found: ID does not exist" containerID="1a32b04719295f2b2643e8e0f85842fd69b29ea69ab25859057863fd6f2731a4" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.061302 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a32b04719295f2b2643e8e0f85842fd69b29ea69ab25859057863fd6f2731a4"} err="failed to get container status \"1a32b04719295f2b2643e8e0f85842fd69b29ea69ab25859057863fd6f2731a4\": rpc error: code = NotFound desc = could not find container \"1a32b04719295f2b2643e8e0f85842fd69b29ea69ab25859057863fd6f2731a4\": container with ID starting with 1a32b04719295f2b2643e8e0f85842fd69b29ea69ab25859057863fd6f2731a4 not found: ID does not exist" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.061327 5173 scope.go:117] "RemoveContainer" containerID="e753700835c5c0c431be571b87dc03786d70f6041f01d18a5b46bddd2fc8d2d6" Dec 09 14:18:11 crc kubenswrapper[5173]: E1209 14:18:11.062047 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e753700835c5c0c431be571b87dc03786d70f6041f01d18a5b46bddd2fc8d2d6\": container with ID starting with e753700835c5c0c431be571b87dc03786d70f6041f01d18a5b46bddd2fc8d2d6 not found: ID does not exist" 
containerID="e753700835c5c0c431be571b87dc03786d70f6041f01d18a5b46bddd2fc8d2d6" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.062156 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e753700835c5c0c431be571b87dc03786d70f6041f01d18a5b46bddd2fc8d2d6"} err="failed to get container status \"e753700835c5c0c431be571b87dc03786d70f6041f01d18a5b46bddd2fc8d2d6\": rpc error: code = NotFound desc = could not find container \"e753700835c5c0c431be571b87dc03786d70f6041f01d18a5b46bddd2fc8d2d6\": container with ID starting with e753700835c5c0c431be571b87dc03786d70f6041f01d18a5b46bddd2fc8d2d6 not found: ID does not exist" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.062249 5173 scope.go:117] "RemoveContainer" containerID="e70547d63b919901fa55435fada87003be15aa53a66d4781392f3192b1aa43fc" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.075673 5173 scope.go:117] "RemoveContainer" containerID="67225469e0f1612f0641a22816052540e2d74a5fc97e3b66321bf5ed6a0fc8e1" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.088729 5173 scope.go:117] "RemoveContainer" containerID="b888bd4aa823d00fb8ae9d954bd06f242d7dbf04912d08a7f07c3d48b38e6583" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.102051 5173 scope.go:117] "RemoveContainer" containerID="e70547d63b919901fa55435fada87003be15aa53a66d4781392f3192b1aa43fc" Dec 09 14:18:11 crc kubenswrapper[5173]: E1209 14:18:11.103457 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e70547d63b919901fa55435fada87003be15aa53a66d4781392f3192b1aa43fc\": container with ID starting with e70547d63b919901fa55435fada87003be15aa53a66d4781392f3192b1aa43fc not found: ID does not exist" containerID="e70547d63b919901fa55435fada87003be15aa53a66d4781392f3192b1aa43fc" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.103510 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e70547d63b919901fa55435fada87003be15aa53a66d4781392f3192b1aa43fc"} err="failed to get container status \"e70547d63b919901fa55435fada87003be15aa53a66d4781392f3192b1aa43fc\": rpc error: code = NotFound desc = could not find container \"e70547d63b919901fa55435fada87003be15aa53a66d4781392f3192b1aa43fc\": container with ID starting with e70547d63b919901fa55435fada87003be15aa53a66d4781392f3192b1aa43fc not found: ID does not exist" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.103537 5173 scope.go:117] "RemoveContainer" containerID="67225469e0f1612f0641a22816052540e2d74a5fc97e3b66321bf5ed6a0fc8e1" Dec 09 14:18:11 crc kubenswrapper[5173]: E1209 14:18:11.103775 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67225469e0f1612f0641a22816052540e2d74a5fc97e3b66321bf5ed6a0fc8e1\": container with ID starting with 67225469e0f1612f0641a22816052540e2d74a5fc97e3b66321bf5ed6a0fc8e1 not found: ID does not exist" containerID="67225469e0f1612f0641a22816052540e2d74a5fc97e3b66321bf5ed6a0fc8e1" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.103796 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67225469e0f1612f0641a22816052540e2d74a5fc97e3b66321bf5ed6a0fc8e1"} err="failed to get container status \"67225469e0f1612f0641a22816052540e2d74a5fc97e3b66321bf5ed6a0fc8e1\": rpc error: code = NotFound desc = could not find container 
\"67225469e0f1612f0641a22816052540e2d74a5fc97e3b66321bf5ed6a0fc8e1\": container with ID starting with 67225469e0f1612f0641a22816052540e2d74a5fc97e3b66321bf5ed6a0fc8e1 not found: ID does not exist" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.103807 5173 scope.go:117] "RemoveContainer" containerID="b888bd4aa823d00fb8ae9d954bd06f242d7dbf04912d08a7f07c3d48b38e6583" Dec 09 14:18:11 crc kubenswrapper[5173]: E1209 14:18:11.103956 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b888bd4aa823d00fb8ae9d954bd06f242d7dbf04912d08a7f07c3d48b38e6583\": container with ID starting with b888bd4aa823d00fb8ae9d954bd06f242d7dbf04912d08a7f07c3d48b38e6583 not found: ID does not exist" containerID="b888bd4aa823d00fb8ae9d954bd06f242d7dbf04912d08a7f07c3d48b38e6583" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.103977 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b888bd4aa823d00fb8ae9d954bd06f242d7dbf04912d08a7f07c3d48b38e6583"} err="failed to get container status \"b888bd4aa823d00fb8ae9d954bd06f242d7dbf04912d08a7f07c3d48b38e6583\": rpc error: code = NotFound desc = could not find container \"b888bd4aa823d00fb8ae9d954bd06f242d7dbf04912d08a7f07c3d48b38e6583\": container with ID starting with b888bd4aa823d00fb8ae9d954bd06f242d7dbf04912d08a7f07c3d48b38e6583 not found: ID does not exist" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.103994 5173 scope.go:117] "RemoveContainer" containerID="2f5d285072af1c1e7dc639151ecd13906fb57bfb974c0fc1de48798d8268cbce" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.124612 5173 scope.go:117] "RemoveContainer" containerID="58275f4a95cc3b62c1e3fd0940879b978f28ca47c94d1166736bc5c882ffc913" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.144396 5173 scope.go:117] "RemoveContainer" containerID="f31f8f75bf829d426b46a72c4b8b191b6a9ab1d10bf4edc620f1cdca3648f4e5" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.155945 5173 scope.go:117] "RemoveContainer" containerID="2f5d285072af1c1e7dc639151ecd13906fb57bfb974c0fc1de48798d8268cbce" Dec 09 14:18:11 crc kubenswrapper[5173]: E1209 14:18:11.156290 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f5d285072af1c1e7dc639151ecd13906fb57bfb974c0fc1de48798d8268cbce\": container with ID starting with 2f5d285072af1c1e7dc639151ecd13906fb57bfb974c0fc1de48798d8268cbce not found: ID does not exist" containerID="2f5d285072af1c1e7dc639151ecd13906fb57bfb974c0fc1de48798d8268cbce" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.156317 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5d285072af1c1e7dc639151ecd13906fb57bfb974c0fc1de48798d8268cbce"} err="failed to get container status \"2f5d285072af1c1e7dc639151ecd13906fb57bfb974c0fc1de48798d8268cbce\": rpc error: code = NotFound desc = could not find container \"2f5d285072af1c1e7dc639151ecd13906fb57bfb974c0fc1de48798d8268cbce\": container with ID starting with 2f5d285072af1c1e7dc639151ecd13906fb57bfb974c0fc1de48798d8268cbce not found: ID does not exist" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.156338 5173 scope.go:117] "RemoveContainer" containerID="58275f4a95cc3b62c1e3fd0940879b978f28ca47c94d1166736bc5c882ffc913" Dec 09 14:18:11 crc kubenswrapper[5173]: E1209 14:18:11.156634 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"58275f4a95cc3b62c1e3fd0940879b978f28ca47c94d1166736bc5c882ffc913\": container with ID starting with 58275f4a95cc3b62c1e3fd0940879b978f28ca47c94d1166736bc5c882ffc913 not found: ID does not exist" containerID="58275f4a95cc3b62c1e3fd0940879b978f28ca47c94d1166736bc5c882ffc913" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.156655 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58275f4a95cc3b62c1e3fd0940879b978f28ca47c94d1166736bc5c882ffc913"} err="failed to get container status \"58275f4a95cc3b62c1e3fd0940879b978f28ca47c94d1166736bc5c882ffc913\": rpc error: code = NotFound desc = could not find container \"58275f4a95cc3b62c1e3fd0940879b978f28ca47c94d1166736bc5c882ffc913\": container with ID starting with 58275f4a95cc3b62c1e3fd0940879b978f28ca47c94d1166736bc5c882ffc913 not found: ID does not exist" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.156667 5173 scope.go:117] "RemoveContainer" containerID="f31f8f75bf829d426b46a72c4b8b191b6a9ab1d10bf4edc620f1cdca3648f4e5" Dec 09 14:18:11 crc kubenswrapper[5173]: E1209 14:18:11.156899 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f31f8f75bf829d426b46a72c4b8b191b6a9ab1d10bf4edc620f1cdca3648f4e5\": container with ID starting with f31f8f75bf829d426b46a72c4b8b191b6a9ab1d10bf4edc620f1cdca3648f4e5 not found: ID does not exist" containerID="f31f8f75bf829d426b46a72c4b8b191b6a9ab1d10bf4edc620f1cdca3648f4e5" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.156921 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31f8f75bf829d426b46a72c4b8b191b6a9ab1d10bf4edc620f1cdca3648f4e5"} err="failed to get container status \"f31f8f75bf829d426b46a72c4b8b191b6a9ab1d10bf4edc620f1cdca3648f4e5\": rpc error: code = NotFound desc = could not find container \"f31f8f75bf829d426b46a72c4b8b191b6a9ab1d10bf4edc620f1cdca3648f4e5\": container with ID starting with f31f8f75bf829d426b46a72c4b8b191b6a9ab1d10bf4edc620f1cdca3648f4e5 not found: ID does not exist" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.878642 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07be13ae-949a-42e1-9366-afe32b5480f2" path="/var/lib/kubelet/pods/07be13ae-949a-42e1-9366-afe32b5480f2/volumes" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.882906 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8536effa-529d-4962-ab4e-0d8e1c3c4d93" path="/var/lib/kubelet/pods/8536effa-529d-4962-ab4e-0d8e1c3c4d93/volumes" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.884911 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a79afc8b-ca22-4e56-b7a9-d725b23e30ff" path="/var/lib/kubelet/pods/a79afc8b-ca22-4e56-b7a9-d725b23e30ff/volumes" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.887299 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae976069-cbe3-4195-8666-ec1e96e284e9" path="/var/lib/kubelet/pods/ae976069-cbe3-4195-8666-ec1e96e284e9/volumes" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.889040 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d171fe05-fe49-46fb-9407-bdc1f9272d4b" path="/var/lib/kubelet/pods/d171fe05-fe49-46fb-9407-bdc1f9272d4b/volumes" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.915769 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/marketplace-operator-547dbd544d-9cgv7" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.968345 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bcwrn"] Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.969466 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d171fe05-fe49-46fb-9407-bdc1f9272d4b" containerName="marketplace-operator" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.969550 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="d171fe05-fe49-46fb-9407-bdc1f9272d4b" containerName="marketplace-operator" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.969616 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="07be13ae-949a-42e1-9366-afe32b5480f2" containerName="extract-utilities" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.969682 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="07be13ae-949a-42e1-9366-afe32b5480f2" containerName="extract-utilities" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.969739 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="07be13ae-949a-42e1-9366-afe32b5480f2" containerName="extract-content" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.969797 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="07be13ae-949a-42e1-9366-afe32b5480f2" containerName="extract-content" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.969857 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="07be13ae-949a-42e1-9366-afe32b5480f2" containerName="registry-server" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.969933 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="07be13ae-949a-42e1-9366-afe32b5480f2" containerName="registry-server" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.970006 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a79afc8b-ca22-4e56-b7a9-d725b23e30ff" containerName="extract-content" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.970063 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79afc8b-ca22-4e56-b7a9-d725b23e30ff" containerName="extract-content" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.970169 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8536effa-529d-4962-ab4e-0d8e1c3c4d93" containerName="extract-utilities" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.970228 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="8536effa-529d-4962-ab4e-0d8e1c3c4d93" containerName="extract-utilities" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.970302 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae976069-cbe3-4195-8666-ec1e96e284e9" containerName="extract-content" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.970378 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae976069-cbe3-4195-8666-ec1e96e284e9" containerName="extract-content" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.970434 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae976069-cbe3-4195-8666-ec1e96e284e9" containerName="registry-server" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.970482 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae976069-cbe3-4195-8666-ec1e96e284e9" containerName="registry-server" Dec 09 
14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.970561 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8536effa-529d-4962-ab4e-0d8e1c3c4d93" containerName="extract-content" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.970617 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="8536effa-529d-4962-ab4e-0d8e1c3c4d93" containerName="extract-content" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.970674 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a79afc8b-ca22-4e56-b7a9-d725b23e30ff" containerName="extract-utilities" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.970726 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79afc8b-ca22-4e56-b7a9-d725b23e30ff" containerName="extract-utilities" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.970783 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8536effa-529d-4962-ab4e-0d8e1c3c4d93" containerName="registry-server" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.970837 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="8536effa-529d-4962-ab4e-0d8e1c3c4d93" containerName="registry-server" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.970898 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ae976069-cbe3-4195-8666-ec1e96e284e9" containerName="extract-utilities" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.970947 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae976069-cbe3-4195-8666-ec1e96e284e9" containerName="extract-utilities" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.971005 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a79afc8b-ca22-4e56-b7a9-d725b23e30ff" containerName="registry-server" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.971062 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79afc8b-ca22-4e56-b7a9-d725b23e30ff" containerName="registry-server" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.971193 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="d171fe05-fe49-46fb-9407-bdc1f9272d4b" containerName="marketplace-operator" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.971260 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="8536effa-529d-4962-ab4e-0d8e1c3c4d93" containerName="registry-server" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.972371 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="d171fe05-fe49-46fb-9407-bdc1f9272d4b" containerName="marketplace-operator" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.972446 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="a79afc8b-ca22-4e56-b7a9-d725b23e30ff" containerName="registry-server" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.972502 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="07be13ae-949a-42e1-9366-afe32b5480f2" containerName="registry-server" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.972575 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="ae976069-cbe3-4195-8666-ec1e96e284e9" containerName="registry-server" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.972743 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d171fe05-fe49-46fb-9407-bdc1f9272d4b" containerName="marketplace-operator" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 
14:18:11.972806 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="d171fe05-fe49-46fb-9407-bdc1f9272d4b" containerName="marketplace-operator" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.982515 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bcwrn"] Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.982830 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bcwrn" Dec 09 14:18:11 crc kubenswrapper[5173]: I1209 14:18:11.984942 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.111888 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeddd4d7-3359-4c58-9f20-fb21ce3ab252-catalog-content\") pod \"certified-operators-bcwrn\" (UID: \"aeddd4d7-3359-4c58-9f20-fb21ce3ab252\") " pod="openshift-marketplace/certified-operators-bcwrn" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.112197 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2hht\" (UniqueName: \"kubernetes.io/projected/aeddd4d7-3359-4c58-9f20-fb21ce3ab252-kube-api-access-p2hht\") pod \"certified-operators-bcwrn\" (UID: \"aeddd4d7-3359-4c58-9f20-fb21ce3ab252\") " pod="openshift-marketplace/certified-operators-bcwrn" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.112243 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeddd4d7-3359-4c58-9f20-fb21ce3ab252-utilities\") pod \"certified-operators-bcwrn\" (UID: \"aeddd4d7-3359-4c58-9f20-fb21ce3ab252\") " pod="openshift-marketplace/certified-operators-bcwrn" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.162262 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kqbwr"] Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.166853 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kqbwr" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.168728 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.175044 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kqbwr"] Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.213809 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeddd4d7-3359-4c58-9f20-fb21ce3ab252-utilities\") pod \"certified-operators-bcwrn\" (UID: \"aeddd4d7-3359-4c58-9f20-fb21ce3ab252\") " pod="openshift-marketplace/certified-operators-bcwrn" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.213918 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeddd4d7-3359-4c58-9f20-fb21ce3ab252-catalog-content\") pod \"certified-operators-bcwrn\" (UID: \"aeddd4d7-3359-4c58-9f20-fb21ce3ab252\") " pod="openshift-marketplace/certified-operators-bcwrn" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.213950 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p2hht\" (UniqueName: \"kubernetes.io/projected/aeddd4d7-3359-4c58-9f20-fb21ce3ab252-kube-api-access-p2hht\") pod \"certified-operators-bcwrn\" (UID: \"aeddd4d7-3359-4c58-9f20-fb21ce3ab252\") " pod="openshift-marketplace/certified-operators-bcwrn" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.214313 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeddd4d7-3359-4c58-9f20-fb21ce3ab252-catalog-content\") pod \"certified-operators-bcwrn\" (UID: \"aeddd4d7-3359-4c58-9f20-fb21ce3ab252\") " pod="openshift-marketplace/certified-operators-bcwrn" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.214556 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeddd4d7-3359-4c58-9f20-fb21ce3ab252-utilities\") pod \"certified-operators-bcwrn\" (UID: \"aeddd4d7-3359-4c58-9f20-fb21ce3ab252\") " pod="openshift-marketplace/certified-operators-bcwrn" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.243859 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2hht\" (UniqueName: \"kubernetes.io/projected/aeddd4d7-3359-4c58-9f20-fb21ce3ab252-kube-api-access-p2hht\") pod \"certified-operators-bcwrn\" (UID: \"aeddd4d7-3359-4c58-9f20-fb21ce3ab252\") " pod="openshift-marketplace/certified-operators-bcwrn" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.306431 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bcwrn" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.314844 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9vcq\" (UniqueName: \"kubernetes.io/projected/19891dac-13c6-4bdc-94a2-a1733f5814e4-kube-api-access-b9vcq\") pod \"community-operators-kqbwr\" (UID: \"19891dac-13c6-4bdc-94a2-a1733f5814e4\") " pod="openshift-marketplace/community-operators-kqbwr" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.314882 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19891dac-13c6-4bdc-94a2-a1733f5814e4-catalog-content\") pod \"community-operators-kqbwr\" (UID: \"19891dac-13c6-4bdc-94a2-a1733f5814e4\") " pod="openshift-marketplace/community-operators-kqbwr" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.315033 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19891dac-13c6-4bdc-94a2-a1733f5814e4-utilities\") pod \"community-operators-kqbwr\" (UID: \"19891dac-13c6-4bdc-94a2-a1733f5814e4\") " pod="openshift-marketplace/community-operators-kqbwr" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.415988 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b9vcq\" (UniqueName: \"kubernetes.io/projected/19891dac-13c6-4bdc-94a2-a1733f5814e4-kube-api-access-b9vcq\") pod \"community-operators-kqbwr\" (UID: \"19891dac-13c6-4bdc-94a2-a1733f5814e4\") " pod="openshift-marketplace/community-operators-kqbwr" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.416031 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19891dac-13c6-4bdc-94a2-a1733f5814e4-catalog-content\") pod \"community-operators-kqbwr\" (UID: \"19891dac-13c6-4bdc-94a2-a1733f5814e4\") " pod="openshift-marketplace/community-operators-kqbwr" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.416084 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19891dac-13c6-4bdc-94a2-a1733f5814e4-utilities\") pod \"community-operators-kqbwr\" (UID: \"19891dac-13c6-4bdc-94a2-a1733f5814e4\") " pod="openshift-marketplace/community-operators-kqbwr" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.416501 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19891dac-13c6-4bdc-94a2-a1733f5814e4-utilities\") pod \"community-operators-kqbwr\" (UID: \"19891dac-13c6-4bdc-94a2-a1733f5814e4\") " pod="openshift-marketplace/community-operators-kqbwr" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.416913 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19891dac-13c6-4bdc-94a2-a1733f5814e4-catalog-content\") pod \"community-operators-kqbwr\" (UID: \"19891dac-13c6-4bdc-94a2-a1733f5814e4\") " pod="openshift-marketplace/community-operators-kqbwr" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.435028 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9vcq\" (UniqueName: \"kubernetes.io/projected/19891dac-13c6-4bdc-94a2-a1733f5814e4-kube-api-access-b9vcq\") pod 
\"community-operators-kqbwr\" (UID: \"19891dac-13c6-4bdc-94a2-a1733f5814e4\") " pod="openshift-marketplace/community-operators-kqbwr" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.498520 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kqbwr" Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.674395 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bcwrn"] Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.864006 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kqbwr"] Dec 09 14:18:12 crc kubenswrapper[5173]: W1209 14:18:12.874144 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19891dac_13c6_4bdc_94a2_a1733f5814e4.slice/crio-ab027a0984d90cf675c54edf050b3a14484d4d3a0af32a382fca6f38cce1bd5e WatchSource:0}: Error finding container ab027a0984d90cf675c54edf050b3a14484d4d3a0af32a382fca6f38cce1bd5e: Status 404 returned error can't find the container with id ab027a0984d90cf675c54edf050b3a14484d4d3a0af32a382fca6f38cce1bd5e Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.919261 5173 generic.go:358] "Generic (PLEG): container finished" podID="aeddd4d7-3359-4c58-9f20-fb21ce3ab252" containerID="825e2d5cff91dcbeea29afa04e72121bf6615279c1121747b6f2956f43fa5dc9" exitCode=0 Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.919498 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcwrn" event={"ID":"aeddd4d7-3359-4c58-9f20-fb21ce3ab252","Type":"ContainerDied","Data":"825e2d5cff91dcbeea29afa04e72121bf6615279c1121747b6f2956f43fa5dc9"} Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.919545 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcwrn" event={"ID":"aeddd4d7-3359-4c58-9f20-fb21ce3ab252","Type":"ContainerStarted","Data":"50815ad52f524ac5a50006e71ffc7a8c4dd7be4cd2b1e9e39d5d80f6a06148cb"} Dec 09 14:18:12 crc kubenswrapper[5173]: I1209 14:18:12.922166 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqbwr" event={"ID":"19891dac-13c6-4bdc-94a2-a1733f5814e4","Type":"ContainerStarted","Data":"ab027a0984d90cf675c54edf050b3a14484d4d3a0af32a382fca6f38cce1bd5e"} Dec 09 14:18:13 crc kubenswrapper[5173]: I1209 14:18:13.928119 5173 generic.go:358] "Generic (PLEG): container finished" podID="aeddd4d7-3359-4c58-9f20-fb21ce3ab252" containerID="5137856cc1d9f08535f11719f98dc9f9607a106f823532195bdcad0ba3daa732" exitCode=0 Dec 09 14:18:13 crc kubenswrapper[5173]: I1209 14:18:13.928266 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcwrn" event={"ID":"aeddd4d7-3359-4c58-9f20-fb21ce3ab252","Type":"ContainerDied","Data":"5137856cc1d9f08535f11719f98dc9f9607a106f823532195bdcad0ba3daa732"} Dec 09 14:18:13 crc kubenswrapper[5173]: I1209 14:18:13.930089 5173 generic.go:358] "Generic (PLEG): container finished" podID="19891dac-13c6-4bdc-94a2-a1733f5814e4" containerID="a3a19b4060b0725d5d3cffd9e623f9e3e624f06ef5b870cffc24ab5949bba0f4" exitCode=0 Dec 09 14:18:13 crc kubenswrapper[5173]: I1209 14:18:13.930122 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqbwr" 
event={"ID":"19891dac-13c6-4bdc-94a2-a1733f5814e4","Type":"ContainerDied","Data":"a3a19b4060b0725d5d3cffd9e623f9e3e624f06ef5b870cffc24ab5949bba0f4"} Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.361617 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sdfwh"] Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.368556 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdfwh" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.370643 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.371590 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdfwh"] Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.439442 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f44083e3-315f-45c4-8753-b2196a9848a9-catalog-content\") pod \"redhat-marketplace-sdfwh\" (UID: \"f44083e3-315f-45c4-8753-b2196a9848a9\") " pod="openshift-marketplace/redhat-marketplace-sdfwh" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.439501 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f44083e3-315f-45c4-8753-b2196a9848a9-utilities\") pod \"redhat-marketplace-sdfwh\" (UID: \"f44083e3-315f-45c4-8753-b2196a9848a9\") " pod="openshift-marketplace/redhat-marketplace-sdfwh" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.439668 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d64fz\" (UniqueName: \"kubernetes.io/projected/f44083e3-315f-45c4-8753-b2196a9848a9-kube-api-access-d64fz\") pod \"redhat-marketplace-sdfwh\" (UID: \"f44083e3-315f-45c4-8753-b2196a9848a9\") " pod="openshift-marketplace/redhat-marketplace-sdfwh" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.540794 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d64fz\" (UniqueName: \"kubernetes.io/projected/f44083e3-315f-45c4-8753-b2196a9848a9-kube-api-access-d64fz\") pod \"redhat-marketplace-sdfwh\" (UID: \"f44083e3-315f-45c4-8753-b2196a9848a9\") " pod="openshift-marketplace/redhat-marketplace-sdfwh" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.541095 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f44083e3-315f-45c4-8753-b2196a9848a9-catalog-content\") pod \"redhat-marketplace-sdfwh\" (UID: \"f44083e3-315f-45c4-8753-b2196a9848a9\") " pod="openshift-marketplace/redhat-marketplace-sdfwh" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.541123 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f44083e3-315f-45c4-8753-b2196a9848a9-utilities\") pod \"redhat-marketplace-sdfwh\" (UID: \"f44083e3-315f-45c4-8753-b2196a9848a9\") " pod="openshift-marketplace/redhat-marketplace-sdfwh" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.541655 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f44083e3-315f-45c4-8753-b2196a9848a9-utilities\") 
pod \"redhat-marketplace-sdfwh\" (UID: \"f44083e3-315f-45c4-8753-b2196a9848a9\") " pod="openshift-marketplace/redhat-marketplace-sdfwh" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.541663 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f44083e3-315f-45c4-8753-b2196a9848a9-catalog-content\") pod \"redhat-marketplace-sdfwh\" (UID: \"f44083e3-315f-45c4-8753-b2196a9848a9\") " pod="openshift-marketplace/redhat-marketplace-sdfwh" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.562207 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lxpmn"] Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.569508 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lxpmn" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.571969 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.572888 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d64fz\" (UniqueName: \"kubernetes.io/projected/f44083e3-315f-45c4-8753-b2196a9848a9-kube-api-access-d64fz\") pod \"redhat-marketplace-sdfwh\" (UID: \"f44083e3-315f-45c4-8753-b2196a9848a9\") " pod="openshift-marketplace/redhat-marketplace-sdfwh" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.573310 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lxpmn"] Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.642262 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/543fad97-2c84-4bef-8ad0-c2df925668a9-catalog-content\") pod \"redhat-operators-lxpmn\" (UID: \"543fad97-2c84-4bef-8ad0-c2df925668a9\") " pod="openshift-marketplace/redhat-operators-lxpmn" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.642326 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxgjm\" (UniqueName: \"kubernetes.io/projected/543fad97-2c84-4bef-8ad0-c2df925668a9-kube-api-access-pxgjm\") pod \"redhat-operators-lxpmn\" (UID: \"543fad97-2c84-4bef-8ad0-c2df925668a9\") " pod="openshift-marketplace/redhat-operators-lxpmn" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.642476 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/543fad97-2c84-4bef-8ad0-c2df925668a9-utilities\") pod \"redhat-operators-lxpmn\" (UID: \"543fad97-2c84-4bef-8ad0-c2df925668a9\") " pod="openshift-marketplace/redhat-operators-lxpmn" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.682606 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdfwh" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.743830 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/543fad97-2c84-4bef-8ad0-c2df925668a9-catalog-content\") pod \"redhat-operators-lxpmn\" (UID: \"543fad97-2c84-4bef-8ad0-c2df925668a9\") " pod="openshift-marketplace/redhat-operators-lxpmn" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.743882 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pxgjm\" (UniqueName: \"kubernetes.io/projected/543fad97-2c84-4bef-8ad0-c2df925668a9-kube-api-access-pxgjm\") pod \"redhat-operators-lxpmn\" (UID: \"543fad97-2c84-4bef-8ad0-c2df925668a9\") " pod="openshift-marketplace/redhat-operators-lxpmn" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.743915 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/543fad97-2c84-4bef-8ad0-c2df925668a9-utilities\") pod \"redhat-operators-lxpmn\" (UID: \"543fad97-2c84-4bef-8ad0-c2df925668a9\") " pod="openshift-marketplace/redhat-operators-lxpmn" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.744494 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/543fad97-2c84-4bef-8ad0-c2df925668a9-utilities\") pod \"redhat-operators-lxpmn\" (UID: \"543fad97-2c84-4bef-8ad0-c2df925668a9\") " pod="openshift-marketplace/redhat-operators-lxpmn" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.744782 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/543fad97-2c84-4bef-8ad0-c2df925668a9-catalog-content\") pod \"redhat-operators-lxpmn\" (UID: \"543fad97-2c84-4bef-8ad0-c2df925668a9\") " pod="openshift-marketplace/redhat-operators-lxpmn" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.767536 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxgjm\" (UniqueName: \"kubernetes.io/projected/543fad97-2c84-4bef-8ad0-c2df925668a9-kube-api-access-pxgjm\") pod \"redhat-operators-lxpmn\" (UID: \"543fad97-2c84-4bef-8ad0-c2df925668a9\") " pod="openshift-marketplace/redhat-operators-lxpmn" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.895643 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lxpmn" Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.936806 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcwrn" event={"ID":"aeddd4d7-3359-4c58-9f20-fb21ce3ab252","Type":"ContainerStarted","Data":"d8d7a99a585e4cbe7abc586882c5d7e90b9625d98f9c5a9d6795854a1eaf7937"} Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.939067 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqbwr" event={"ID":"19891dac-13c6-4bdc-94a2-a1733f5814e4","Type":"ContainerStarted","Data":"4a377d5b70a35afa75a532efca4301497aafc63ea6f28b8e0dc963d727e46cd6"} Dec 09 14:18:14 crc kubenswrapper[5173]: I1209 14:18:14.990195 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bcwrn" podStartSLOduration=3.241607947 podStartE2EDuration="3.990175315s" podCreationTimestamp="2025-12-09 14:18:11 +0000 UTC" firstStartedPulling="2025-12-09 14:18:12.920036595 +0000 UTC m=+375.845318842" lastFinishedPulling="2025-12-09 14:18:13.668603963 +0000 UTC m=+376.593886210" observedRunningTime="2025-12-09 14:18:14.953733662 +0000 UTC m=+377.879015929" watchObservedRunningTime="2025-12-09 14:18:14.990175315 +0000 UTC m=+377.915457562" Dec 09 14:18:15 crc kubenswrapper[5173]: I1209 14:18:15.081352 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdfwh"] Dec 09 14:18:15 crc kubenswrapper[5173]: W1209 14:18:15.089332 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf44083e3_315f_45c4_8753_b2196a9848a9.slice/crio-9f17405e098dbb51bf199d62fb89fefb7548d8ddda24afb4338b969c7ba273be WatchSource:0}: Error finding container 9f17405e098dbb51bf199d62fb89fefb7548d8ddda24afb4338b969c7ba273be: Status 404 returned error can't find the container with id 9f17405e098dbb51bf199d62fb89fefb7548d8ddda24afb4338b969c7ba273be Dec 09 14:18:15 crc kubenswrapper[5173]: I1209 14:18:15.294513 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lxpmn"] Dec 09 14:18:15 crc kubenswrapper[5173]: W1209 14:18:15.325921 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod543fad97_2c84_4bef_8ad0_c2df925668a9.slice/crio-34268dcdb3f8f9f42baae78cdc4016585de2afe28f0348341af52e3a867f3be0 WatchSource:0}: Error finding container 34268dcdb3f8f9f42baae78cdc4016585de2afe28f0348341af52e3a867f3be0: Status 404 returned error can't find the container with id 34268dcdb3f8f9f42baae78cdc4016585de2afe28f0348341af52e3a867f3be0 Dec 09 14:18:15 crc kubenswrapper[5173]: I1209 14:18:15.945530 5173 generic.go:358] "Generic (PLEG): container finished" podID="543fad97-2c84-4bef-8ad0-c2df925668a9" containerID="a2b49edb44d4e63d56bd6c3d9d7fb25f4817f37a8109fe9e856929a61fad90d9" exitCode=0 Dec 09 14:18:15 crc kubenswrapper[5173]: I1209 14:18:15.945588 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lxpmn" event={"ID":"543fad97-2c84-4bef-8ad0-c2df925668a9","Type":"ContainerDied","Data":"a2b49edb44d4e63d56bd6c3d9d7fb25f4817f37a8109fe9e856929a61fad90d9"} Dec 09 14:18:15 crc kubenswrapper[5173]: I1209 14:18:15.945641 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lxpmn" 
event={"ID":"543fad97-2c84-4bef-8ad0-c2df925668a9","Type":"ContainerStarted","Data":"34268dcdb3f8f9f42baae78cdc4016585de2afe28f0348341af52e3a867f3be0"} Dec 09 14:18:15 crc kubenswrapper[5173]: I1209 14:18:15.947144 5173 generic.go:358] "Generic (PLEG): container finished" podID="f44083e3-315f-45c4-8753-b2196a9848a9" containerID="085782f8c47de5fb97a1412d0bcf61e7d174866191071adb82db8c31c1c65d71" exitCode=0 Dec 09 14:18:15 crc kubenswrapper[5173]: I1209 14:18:15.947233 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdfwh" event={"ID":"f44083e3-315f-45c4-8753-b2196a9848a9","Type":"ContainerDied","Data":"085782f8c47de5fb97a1412d0bcf61e7d174866191071adb82db8c31c1c65d71"} Dec 09 14:18:15 crc kubenswrapper[5173]: I1209 14:18:15.947267 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdfwh" event={"ID":"f44083e3-315f-45c4-8753-b2196a9848a9","Type":"ContainerStarted","Data":"9f17405e098dbb51bf199d62fb89fefb7548d8ddda24afb4338b969c7ba273be"} Dec 09 14:18:15 crc kubenswrapper[5173]: I1209 14:18:15.949711 5173 generic.go:358] "Generic (PLEG): container finished" podID="19891dac-13c6-4bdc-94a2-a1733f5814e4" containerID="4a377d5b70a35afa75a532efca4301497aafc63ea6f28b8e0dc963d727e46cd6" exitCode=0 Dec 09 14:18:15 crc kubenswrapper[5173]: I1209 14:18:15.950187 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqbwr" event={"ID":"19891dac-13c6-4bdc-94a2-a1733f5814e4","Type":"ContainerDied","Data":"4a377d5b70a35afa75a532efca4301497aafc63ea6f28b8e0dc963d727e46cd6"} Dec 09 14:18:16 crc kubenswrapper[5173]: I1209 14:18:16.956243 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqbwr" event={"ID":"19891dac-13c6-4bdc-94a2-a1733f5814e4","Type":"ContainerStarted","Data":"21ad37c7563d21a45e9094751f20d84d4202ab01b1efeb4ae14421ca4c9cd289"} Dec 09 14:18:16 crc kubenswrapper[5173]: I1209 14:18:16.957827 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lxpmn" event={"ID":"543fad97-2c84-4bef-8ad0-c2df925668a9","Type":"ContainerStarted","Data":"5d7a25f33ed0cdeb94c92cd5e758d56f0a8d2aa0d3205e76fe89bc19683dc756"} Dec 09 14:18:16 crc kubenswrapper[5173]: I1209 14:18:16.958957 5173 generic.go:358] "Generic (PLEG): container finished" podID="f44083e3-315f-45c4-8753-b2196a9848a9" containerID="c62c560c9e8f734ab12defa07b6ee02ef98284cda460c7b16d090111573b78d7" exitCode=0 Dec 09 14:18:16 crc kubenswrapper[5173]: I1209 14:18:16.959096 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdfwh" event={"ID":"f44083e3-315f-45c4-8753-b2196a9848a9","Type":"ContainerDied","Data":"c62c560c9e8f734ab12defa07b6ee02ef98284cda460c7b16d090111573b78d7"} Dec 09 14:18:17 crc kubenswrapper[5173]: I1209 14:18:17.013167 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kqbwr" podStartSLOduration=4.207006012 podStartE2EDuration="5.013144189s" podCreationTimestamp="2025-12-09 14:18:12 +0000 UTC" firstStartedPulling="2025-12-09 14:18:13.931198242 +0000 UTC m=+376.856480499" lastFinishedPulling="2025-12-09 14:18:14.737336429 +0000 UTC m=+377.662618676" observedRunningTime="2025-12-09 14:18:16.981787555 +0000 UTC m=+379.907069822" watchObservedRunningTime="2025-12-09 14:18:17.013144189 +0000 UTC m=+379.938426436" Dec 09 14:18:17 crc kubenswrapper[5173]: I1209 14:18:17.965958 5173 
generic.go:358] "Generic (PLEG): container finished" podID="543fad97-2c84-4bef-8ad0-c2df925668a9" containerID="5d7a25f33ed0cdeb94c92cd5e758d56f0a8d2aa0d3205e76fe89bc19683dc756" exitCode=0 Dec 09 14:18:17 crc kubenswrapper[5173]: I1209 14:18:17.966007 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lxpmn" event={"ID":"543fad97-2c84-4bef-8ad0-c2df925668a9","Type":"ContainerDied","Data":"5d7a25f33ed0cdeb94c92cd5e758d56f0a8d2aa0d3205e76fe89bc19683dc756"} Dec 09 14:18:17 crc kubenswrapper[5173]: I1209 14:18:17.968742 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdfwh" event={"ID":"f44083e3-315f-45c4-8753-b2196a9848a9","Type":"ContainerStarted","Data":"20f0c6f67219bed2a8712470b764beaa4fa4a38d8edb5c7ae93519d9ab8ca98c"} Dec 09 14:18:17 crc kubenswrapper[5173]: I1209 14:18:17.998205 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sdfwh" podStartSLOduration=3.276286505 podStartE2EDuration="3.998190405s" podCreationTimestamp="2025-12-09 14:18:14 +0000 UTC" firstStartedPulling="2025-12-09 14:18:15.948024336 +0000 UTC m=+378.873306583" lastFinishedPulling="2025-12-09 14:18:16.669928235 +0000 UTC m=+379.595210483" observedRunningTime="2025-12-09 14:18:17.996787902 +0000 UTC m=+380.922070169" watchObservedRunningTime="2025-12-09 14:18:17.998190405 +0000 UTC m=+380.923472672" Dec 09 14:18:18 crc kubenswrapper[5173]: I1209 14:18:18.978182 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lxpmn" event={"ID":"543fad97-2c84-4bef-8ad0-c2df925668a9","Type":"ContainerStarted","Data":"651e36dfd96e69c587204e8785938f53b87e331a411d73081afca2681c976bce"} Dec 09 14:18:19 crc kubenswrapper[5173]: I1209 14:18:19.003166 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lxpmn" podStartSLOduration=4.343348718 podStartE2EDuration="5.003146869s" podCreationTimestamp="2025-12-09 14:18:14 +0000 UTC" firstStartedPulling="2025-12-09 14:18:15.946427726 +0000 UTC m=+378.871709973" lastFinishedPulling="2025-12-09 14:18:16.606225887 +0000 UTC m=+379.531508124" observedRunningTime="2025-12-09 14:18:18.99769106 +0000 UTC m=+381.922973337" watchObservedRunningTime="2025-12-09 14:18:19.003146869 +0000 UTC m=+381.928429106" Dec 09 14:18:22 crc kubenswrapper[5173]: I1209 14:18:22.307021 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-bcwrn" Dec 09 14:18:22 crc kubenswrapper[5173]: I1209 14:18:22.307667 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bcwrn" Dec 09 14:18:22 crc kubenswrapper[5173]: I1209 14:18:22.362229 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bcwrn" Dec 09 14:18:22 crc kubenswrapper[5173]: I1209 14:18:22.499427 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-kqbwr" Dec 09 14:18:22 crc kubenswrapper[5173]: I1209 14:18:22.499928 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kqbwr" Dec 09 14:18:22 crc kubenswrapper[5173]: I1209 14:18:22.539293 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-kqbwr" Dec 09 14:18:23 crc kubenswrapper[5173]: I1209 14:18:23.039132 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bcwrn" Dec 09 14:18:23 crc kubenswrapper[5173]: I1209 14:18:23.039184 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kqbwr" Dec 09 14:18:24 crc kubenswrapper[5173]: I1209 14:18:24.683765 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sdfwh" Dec 09 14:18:24 crc kubenswrapper[5173]: I1209 14:18:24.683827 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-sdfwh" Dec 09 14:18:24 crc kubenswrapper[5173]: I1209 14:18:24.722743 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sdfwh" Dec 09 14:18:24 crc kubenswrapper[5173]: I1209 14:18:24.897178 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-lxpmn" Dec 09 14:18:24 crc kubenswrapper[5173]: I1209 14:18:24.897231 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lxpmn" Dec 09 14:18:24 crc kubenswrapper[5173]: I1209 14:18:24.953017 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lxpmn" Dec 09 14:18:25 crc kubenswrapper[5173]: I1209 14:18:25.052591 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sdfwh" Dec 09 14:18:25 crc kubenswrapper[5173]: I1209 14:18:25.061853 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lxpmn" Dec 09 14:18:49 crc kubenswrapper[5173]: I1209 14:18:49.092594 5173 patch_prober.go:28] interesting pod/machine-config-daemon-pxfmg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:18:49 crc kubenswrapper[5173]: I1209 14:18:49.093464 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:19:19 crc kubenswrapper[5173]: I1209 14:19:19.086243 5173 patch_prober.go:28] interesting pod/machine-config-daemon-pxfmg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:19:19 crc kubenswrapper[5173]: I1209 14:19:19.087110 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:19:49 crc kubenswrapper[5173]: I1209 14:19:49.084731 5173 patch_prober.go:28] interesting 
pod/machine-config-daemon-pxfmg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:19:49 crc kubenswrapper[5173]: I1209 14:19:49.085336 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:19:49 crc kubenswrapper[5173]: I1209 14:19:49.085423 5173 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" Dec 09 14:19:49 crc kubenswrapper[5173]: I1209 14:19:49.086084 5173 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ad6833f3ca6b5e4f4c17ba91f6d8096243861b6d149a86087b4c5cd6377d00d"} pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 14:19:49 crc kubenswrapper[5173]: I1209 14:19:49.086172 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" containerID="cri-o://4ad6833f3ca6b5e4f4c17ba91f6d8096243861b6d149a86087b4c5cd6377d00d" gracePeriod=600 Dec 09 14:19:49 crc kubenswrapper[5173]: I1209 14:19:49.530695 5173 generic.go:358] "Generic (PLEG): container finished" podID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerID="4ad6833f3ca6b5e4f4c17ba91f6d8096243861b6d149a86087b4c5cd6377d00d" exitCode=0 Dec 09 14:19:49 crc kubenswrapper[5173]: I1209 14:19:49.530825 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" event={"ID":"8a8dd347-8a1b-4551-a318-abe7c12df817","Type":"ContainerDied","Data":"4ad6833f3ca6b5e4f4c17ba91f6d8096243861b6d149a86087b4c5cd6377d00d"} Dec 09 14:19:49 crc kubenswrapper[5173]: I1209 14:19:49.531289 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" event={"ID":"8a8dd347-8a1b-4551-a318-abe7c12df817","Type":"ContainerStarted","Data":"93d3de927b38141662865320582f39fe7791933ed43554feef05eeb6852d67b1"} Dec 09 14:19:49 crc kubenswrapper[5173]: I1209 14:19:49.531316 5173 scope.go:117] "RemoveContainer" containerID="7e585a8663ff5e2821ef163759a8486a08d59824ba49fa41e0d15200765ef763" Dec 09 14:20:58 crc kubenswrapper[5173]: I1209 14:20:58.053777 5173 scope.go:117] "RemoveContainer" containerID="788e07d69c3506f9073bef94ac28651de5d22cce528c3084cba445a1d7a4c103" Dec 09 14:20:58 crc kubenswrapper[5173]: I1209 14:20:58.092170 5173 scope.go:117] "RemoveContainer" containerID="551e5fd3f76f13ad4c61985070346c28c651245d542ffc9c1ae64922a22a18aa" Dec 09 14:20:58 crc kubenswrapper[5173]: I1209 14:20:58.111984 5173 scope.go:117] "RemoveContainer" containerID="a99b2ffc961cb8e257be6ee55c2c62d5b4f422e6c5c79fc8bd4f001988be50f0" Dec 09 14:21:49 crc kubenswrapper[5173]: I1209 14:21:49.085118 5173 patch_prober.go:28] interesting pod/machine-config-daemon-pxfmg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
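
The block above is one full liveness-restart cycle for machine-config-daemon-pxfmg: the kubelet probes http://127.0.0.1:8798/health, gets "connection refused" (nothing is listening on that port yet), marks the container unhealthy after repeated failures, kills it with its termination grace period, and PLEG then reports ContainerDied followed by ContainerStarted for the replacement. The sketch below shows the endpoint shape the probe expects; the port and path are taken from the log lines above, but the handler itself is an assumption for illustration, not the real machine-config-daemon:

    // health_stub.go - a minimal sketch of an HTTP liveness endpoint of the
    // shape probed above. The kubelet counts any status from 200 to 399 as
    // probe success; a refused connection, as in the log, counts as failure.
    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK) // probe success
        })
        // The probe dials 127.0.0.1:8798, so the daemon must listen there.
        log.Fatal(http.ListenAndServe("127.0.0.1:8798", mux))
    }
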
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:21:49 crc kubenswrapper[5173]: I1209 14:21:49.085752 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:21:58 crc kubenswrapper[5173]: I1209 14:21:58.436184 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 14:21:58 crc kubenswrapper[5173]: I1209 14:21:58.438801 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 14:22:11 crc kubenswrapper[5173]: I1209 14:22:11.693224 5173 ???:1] "http: TLS handshake error from 192.168.126.11:33074: no serving certificate available for the kubelet" Dec 09 14:22:19 crc kubenswrapper[5173]: I1209 14:22:19.084944 5173 patch_prober.go:28] interesting pod/machine-config-daemon-pxfmg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:22:19 crc kubenswrapper[5173]: I1209 14:22:19.085417 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:22:49 crc kubenswrapper[5173]: I1209 14:22:49.084897 5173 patch_prober.go:28] interesting pod/machine-config-daemon-pxfmg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:22:49 crc kubenswrapper[5173]: I1209 14:22:49.086032 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:22:49 crc kubenswrapper[5173]: I1209 14:22:49.086118 5173 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" Dec 09 14:22:49 crc kubenswrapper[5173]: I1209 14:22:49.087061 5173 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"93d3de927b38141662865320582f39fe7791933ed43554feef05eeb6852d67b1"} pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 14:22:49 crc kubenswrapper[5173]: I1209 14:22:49.087132 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" 
podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" containerID="cri-o://93d3de927b38141662865320582f39fe7791933ed43554feef05eeb6852d67b1" gracePeriod=600 Dec 09 14:22:49 crc kubenswrapper[5173]: I1209 14:22:49.213953 5173 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 14:22:49 crc kubenswrapper[5173]: I1209 14:22:49.601768 5173 generic.go:358] "Generic (PLEG): container finished" podID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerID="93d3de927b38141662865320582f39fe7791933ed43554feef05eeb6852d67b1" exitCode=0 Dec 09 14:22:49 crc kubenswrapper[5173]: I1209 14:22:49.601858 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" event={"ID":"8a8dd347-8a1b-4551-a318-abe7c12df817","Type":"ContainerDied","Data":"93d3de927b38141662865320582f39fe7791933ed43554feef05eeb6852d67b1"} Dec 09 14:22:49 crc kubenswrapper[5173]: I1209 14:22:49.602123 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" event={"ID":"8a8dd347-8a1b-4551-a318-abe7c12df817","Type":"ContainerStarted","Data":"859cb3132f564d2a8f9a55f99e30a3d865a9afbcb1dbb53a0523762f86be0540"} Dec 09 14:22:49 crc kubenswrapper[5173]: I1209 14:22:49.602145 5173 scope.go:117] "RemoveContainer" containerID="4ad6833f3ca6b5e4f4c17ba91f6d8096243861b6d149a86087b4c5cd6377d00d" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.188586 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf"] Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.189528 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" podUID="07ddf926-e4f7-4486-920c-8d83fca5b4da" containerName="kube-rbac-proxy" containerID="cri-o://655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500" gracePeriod=30 Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.189584 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" podUID="07ddf926-e4f7-4486-920c-8d83fca5b4da" containerName="ovnkube-cluster-manager" containerID="cri-o://8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52" gracePeriod=30 Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.367986 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.398200 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc"] Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.398874 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="07ddf926-e4f7-4486-920c-8d83fca5b4da" containerName="ovnkube-cluster-manager" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.398899 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ddf926-e4f7-4486-920c-8d83fca5b4da" containerName="ovnkube-cluster-manager" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.398918 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="07ddf926-e4f7-4486-920c-8d83fca5b4da" containerName="kube-rbac-proxy" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.398926 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ddf926-e4f7-4486-920c-8d83fca5b4da" containerName="kube-rbac-proxy" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.399061 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="07ddf926-e4f7-4486-920c-8d83fca5b4da" containerName="kube-rbac-proxy" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.399090 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="07ddf926-e4f7-4486-920c-8d83fca5b4da" containerName="ovnkube-cluster-manager" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.403059 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.406382 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/07ddf926-e4f7-4486-920c-8d83fca5b4da-ovnkube-config\") pod \"07ddf926-e4f7-4486-920c-8d83fca5b4da\" (UID: \"07ddf926-e4f7-4486-920c-8d83fca5b4da\") " Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.406474 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/07ddf926-e4f7-4486-920c-8d83fca5b4da-env-overrides\") pod \"07ddf926-e4f7-4486-920c-8d83fca5b4da\" (UID: \"07ddf926-e4f7-4486-920c-8d83fca5b4da\") " Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.406499 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/07ddf926-e4f7-4486-920c-8d83fca5b4da-ovn-control-plane-metrics-cert\") pod \"07ddf926-e4f7-4486-920c-8d83fca5b4da\" (UID: \"07ddf926-e4f7-4486-920c-8d83fca5b4da\") " Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.406555 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdfcm\" (UniqueName: \"kubernetes.io/projected/07ddf926-e4f7-4486-920c-8d83fca5b4da-kube-api-access-mdfcm\") pod \"07ddf926-e4f7-4486-920c-8d83fca5b4da\" (UID: \"07ddf926-e4f7-4486-920c-8d83fca5b4da\") " Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.407280 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ddf926-e4f7-4486-920c-8d83fca5b4da-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "07ddf926-e4f7-4486-920c-8d83fca5b4da" (UID: "07ddf926-e4f7-4486-920c-8d83fca5b4da"). 
InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.407304 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ddf926-e4f7-4486-920c-8d83fca5b4da-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "07ddf926-e4f7-4486-920c-8d83fca5b4da" (UID: "07ddf926-e4f7-4486-920c-8d83fca5b4da"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.414934 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07ddf926-e4f7-4486-920c-8d83fca5b4da-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "07ddf926-e4f7-4486-920c-8d83fca5b4da" (UID: "07ddf926-e4f7-4486-920c-8d83fca5b4da"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.415847 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07ddf926-e4f7-4486-920c-8d83fca5b4da-kube-api-access-mdfcm" (OuterVolumeSpecName: "kube-api-access-mdfcm") pod "07ddf926-e4f7-4486-920c-8d83fca5b4da" (UID: "07ddf926-e4f7-4486-920c-8d83fca5b4da"). InnerVolumeSpecName "kube-api-access-mdfcm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.416533 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4hj6p"] Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.417105 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="ovn-controller" containerID="cri-o://5a539f9e884ee10f4a0bba7a7ce50dd95c423b36c196046435f791e15688e2a0" gracePeriod=30 Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.417157 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="nbdb" containerID="cri-o://ddcdfec3ac8cf6eb937f71437b340c84242ca3a95a2a479d3c6ca13b5d99356a" gracePeriod=30 Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.417183 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="sbdb" containerID="cri-o://958b3c42394f5bda4762c8a20b5ad6dc4de5947214d67c8de6fc2a7258ad7bb7" gracePeriod=30 Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.417255 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://b5e039f291824aa822dd101c3d3c69b2adcedd433290701fc050827ef9923511" gracePeriod=30 Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.417322 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="kube-rbac-proxy-node" containerID="cri-o://86442f9b1ca071f4f9eed36a71a5a1a4955e732d9115098ab6d24b3cd800059c" gracePeriod=30 Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.417131 5173 
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.417131 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="northd" containerID="cri-o://3376a0f5a3173a5ec0c06f49feee9428d3596d3ecdaa8ec7fd1a9b782e0c3150" gracePeriod=30
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.417442 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="ovn-acl-logging" containerID="cri-o://acdb6f15d5b3a695e73fbb6481f04162b21ec33011cd0f275a5bff46a36788ca" gracePeriod=30
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.485809 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="ovnkube-controller" containerID="cri-o://4a2bb8cc7c7e031ab4de5e733d3571412a3459cbc73b22a27811071af61a5d3b" gracePeriod=30
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.508467 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5664d8f4-c4f5-48e7-8a02-1456ddce4ee2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-82ndc\" (UID: \"5664d8f4-c4f5-48e7-8a02-1456ddce4ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc"
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.508687 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5664d8f4-c4f5-48e7-8a02-1456ddce4ee2-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-82ndc\" (UID: \"5664d8f4-c4f5-48e7-8a02-1456ddce4ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc"
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.508813 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6gjf\" (UniqueName: \"kubernetes.io/projected/5664d8f4-c4f5-48e7-8a02-1456ddce4ee2-kube-api-access-b6gjf\") pod \"ovnkube-control-plane-97c9b6c48-82ndc\" (UID: \"5664d8f4-c4f5-48e7-8a02-1456ddce4ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc"
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.508950 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5664d8f4-c4f5-48e7-8a02-1456ddce4ee2-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-82ndc\" (UID: \"5664d8f4-c4f5-48e7-8a02-1456ddce4ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc"
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.509098 5173 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/07ddf926-e4f7-4486-920c-8d83fca5b4da-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.509203 5173 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/07ddf926-e4f7-4486-920c-8d83fca5b4da-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.509305 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mdfcm\" (UniqueName: \"kubernetes.io/projected/07ddf926-e4f7-4486-920c-8d83fca5b4da-kube-api-access-mdfcm\") on node \"crc\" DevicePath \"\""
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.509398 5173 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/07ddf926-e4f7-4486-920c-8d83fca5b4da-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.610934 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5664d8f4-c4f5-48e7-8a02-1456ddce4ee2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-82ndc\" (UID: \"5664d8f4-c4f5-48e7-8a02-1456ddce4ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc"
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.611002 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5664d8f4-c4f5-48e7-8a02-1456ddce4ee2-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-82ndc\" (UID: \"5664d8f4-c4f5-48e7-8a02-1456ddce4ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc"
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.611034 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b6gjf\" (UniqueName: \"kubernetes.io/projected/5664d8f4-c4f5-48e7-8a02-1456ddce4ee2-kube-api-access-b6gjf\") pod \"ovnkube-control-plane-97c9b6c48-82ndc\" (UID: \"5664d8f4-c4f5-48e7-8a02-1456ddce4ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc"
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.611072 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5664d8f4-c4f5-48e7-8a02-1456ddce4ee2-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-82ndc\" (UID: \"5664d8f4-c4f5-48e7-8a02-1456ddce4ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc"
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.611708 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5664d8f4-c4f5-48e7-8a02-1456ddce4ee2-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-82ndc\" (UID: \"5664d8f4-c4f5-48e7-8a02-1456ddce4ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc"
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.612601 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5664d8f4-c4f5-48e7-8a02-1456ddce4ee2-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-82ndc\" (UID: \"5664d8f4-c4f5-48e7-8a02-1456ddce4ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc"
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.617296 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5664d8f4-c4f5-48e7-8a02-1456ddce4ee2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-82ndc\" (UID: \"5664d8f4-c4f5-48e7-8a02-1456ddce4ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc"
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.629103 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6gjf\" (UniqueName: \"kubernetes.io/projected/5664d8f4-c4f5-48e7-8a02-1456ddce4ee2-kube-api-access-b6gjf\") pod \"ovnkube-control-plane-97c9b6c48-82ndc\" (UID: \"5664d8f4-c4f5-48e7-8a02-1456ddce4ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc"
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.775424 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4hj6p_49bec440-391d-48d9-9bc6-a14f40787067/ovn-acl-logging/0.log"
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.776324 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4hj6p_49bec440-391d-48d9-9bc6-a14f40787067/ovn-controller/0.log"
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.776653 5173 generic.go:358] "Generic (PLEG): container finished" podID="49bec440-391d-48d9-9bc6-a14f40787067" containerID="4a2bb8cc7c7e031ab4de5e733d3571412a3459cbc73b22a27811071af61a5d3b" exitCode=0
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.776682 5173 generic.go:358] "Generic (PLEG): container finished" podID="49bec440-391d-48d9-9bc6-a14f40787067" containerID="958b3c42394f5bda4762c8a20b5ad6dc4de5947214d67c8de6fc2a7258ad7bb7" exitCode=0
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.776689 5173 generic.go:358] "Generic (PLEG): container finished" podID="49bec440-391d-48d9-9bc6-a14f40787067" containerID="ddcdfec3ac8cf6eb937f71437b340c84242ca3a95a2a479d3c6ca13b5d99356a" exitCode=0
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.776695 5173 generic.go:358] "Generic (PLEG): container finished" podID="49bec440-391d-48d9-9bc6-a14f40787067" containerID="3376a0f5a3173a5ec0c06f49feee9428d3596d3ecdaa8ec7fd1a9b782e0c3150" exitCode=0
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.776700 5173 generic.go:358] "Generic (PLEG): container finished" podID="49bec440-391d-48d9-9bc6-a14f40787067" containerID="b5e039f291824aa822dd101c3d3c69b2adcedd433290701fc050827ef9923511" exitCode=0
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.776706 5173 generic.go:358] "Generic (PLEG): container finished" podID="49bec440-391d-48d9-9bc6-a14f40787067" containerID="86442f9b1ca071f4f9eed36a71a5a1a4955e732d9115098ab6d24b3cd800059c" exitCode=0
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.776711 5173 generic.go:358] "Generic (PLEG): container finished" podID="49bec440-391d-48d9-9bc6-a14f40787067" containerID="acdb6f15d5b3a695e73fbb6481f04162b21ec33011cd0f275a5bff46a36788ca" exitCode=143
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.776716 5173 generic.go:358] "Generic (PLEG): container finished" podID="49bec440-391d-48d9-9bc6-a14f40787067" containerID="5a539f9e884ee10f4a0bba7a7ce50dd95c423b36c196046435f791e15688e2a0" exitCode=143
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.776757 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerDied","Data":"4a2bb8cc7c7e031ab4de5e733d3571412a3459cbc73b22a27811071af61a5d3b"}
Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.776789 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerDied","Data":"958b3c42394f5bda4762c8a20b5ad6dc4de5947214d67c8de6fc2a7258ad7bb7"}
pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerDied","Data":"ddcdfec3ac8cf6eb937f71437b340c84242ca3a95a2a479d3c6ca13b5d99356a"} Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.776807 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerDied","Data":"3376a0f5a3173a5ec0c06f49feee9428d3596d3ecdaa8ec7fd1a9b782e0c3150"} Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.776817 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerDied","Data":"b5e039f291824aa822dd101c3d3c69b2adcedd433290701fc050827ef9923511"} Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.776826 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerDied","Data":"86442f9b1ca071f4f9eed36a71a5a1a4955e732d9115098ab6d24b3cd800059c"} Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.776835 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerDied","Data":"acdb6f15d5b3a695e73fbb6481f04162b21ec33011cd0f275a5bff46a36788ca"} Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.776843 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerDied","Data":"5a539f9e884ee10f4a0bba7a7ce50dd95c423b36c196046435f791e15688e2a0"} Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.778111 5173 generic.go:358] "Generic (PLEG): container finished" podID="07ddf926-e4f7-4486-920c-8d83fca5b4da" containerID="8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52" exitCode=0 Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.778130 5173 generic.go:358] "Generic (PLEG): container finished" podID="07ddf926-e4f7-4486-920c-8d83fca5b4da" containerID="655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500" exitCode=0 Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.778189 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" event={"ID":"07ddf926-e4f7-4486-920c-8d83fca5b4da","Type":"ContainerDied","Data":"8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52"} Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.778208 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" event={"ID":"07ddf926-e4f7-4486-920c-8d83fca5b4da","Type":"ContainerDied","Data":"655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500"} Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.778221 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" event={"ID":"07ddf926-e4f7-4486-920c-8d83fca5b4da","Type":"ContainerDied","Data":"aec09d0b30d733986639f1dabb0a479287c8f17efd8e1b77e2d9a223494532e9"} Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.778241 5173 scope.go:117] "RemoveContainer" containerID="8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.778741 
5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.780450 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d24z7_a80ae74e-7470-4168-bdc1-454fa2137d7a/kube-multus/0.log" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.780480 5173 generic.go:358] "Generic (PLEG): container finished" podID="a80ae74e-7470-4168-bdc1-454fa2137d7a" containerID="f460a1644c18f7865af7796a312778249adc6d2e94346f6d2c914bd68f28e0d0" exitCode=2 Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.780578 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d24z7" event={"ID":"a80ae74e-7470-4168-bdc1-454fa2137d7a","Type":"ContainerDied","Data":"f460a1644c18f7865af7796a312778249adc6d2e94346f6d2c914bd68f28e0d0"} Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.782283 5173 scope.go:117] "RemoveContainer" containerID="f460a1644c18f7865af7796a312778249adc6d2e94346f6d2c914bd68f28e0d0" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.782805 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.852196 5173 scope.go:117] "RemoveContainer" containerID="655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.890581 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf"] Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.891603 5173 scope.go:117] "RemoveContainer" containerID="8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52" Dec 09 14:23:20 crc kubenswrapper[5173]: E1209 14:23:20.892129 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52\": container with ID starting with 8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52 not found: ID does not exist" containerID="8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.892173 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52"} err="failed to get container status \"8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52\": rpc error: code = NotFound desc = could not find container \"8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52\": container with ID starting with 8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52 not found: ID does not exist" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.892195 5173 scope.go:117] "RemoveContainer" containerID="655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500" Dec 09 14:23:20 crc kubenswrapper[5173]: E1209 14:23:20.892539 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500\": container with ID starting with 655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500 not found: ID does not exist" 
containerID="655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.892566 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500"} err="failed to get container status \"655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500\": rpc error: code = NotFound desc = could not find container \"655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500\": container with ID starting with 655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500 not found: ID does not exist" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.892578 5173 scope.go:117] "RemoveContainer" containerID="8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.892949 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52"} err="failed to get container status \"8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52\": rpc error: code = NotFound desc = could not find container \"8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52\": container with ID starting with 8cbb7454b17c14d4ae63732c1bf26a3d9fb4d91992eea22fdb2864488989ea52 not found: ID does not exist" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.892973 5173 scope.go:117] "RemoveContainer" containerID="655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.893256 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500"} err="failed to get container status \"655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500\": rpc error: code = NotFound desc = could not find container \"655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500\": container with ID starting with 655e405d03706655999705017179a4ca514d558395fec721a7b24e32d6e9e500 not found: ID does not exist" Dec 09 14:23:20 crc kubenswrapper[5173]: I1209 14:23:20.894103 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-srjbf"] Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.243122 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4hj6p_49bec440-391d-48d9-9bc6-a14f40787067/ovn-acl-logging/0.log" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.243597 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4hj6p_49bec440-391d-48d9-9bc6-a14f40787067/ovn-controller/0.log" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.244192 5173 util.go:48] "No ready sandbox for pod can be found. 
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.243122 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4hj6p_49bec440-391d-48d9-9bc6-a14f40787067/ovn-acl-logging/0.log"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.243597 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4hj6p_49bec440-391d-48d9-9bc6-a14f40787067/ovn-controller/0.log"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.244192 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.302873 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-r89b2"]
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.303670 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="kube-rbac-proxy-ovn-metrics"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.303695 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="kube-rbac-proxy-ovn-metrics"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.303710 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="sbdb"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.303718 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="sbdb"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.303747 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="ovnkube-controller"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.303876 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="ovnkube-controller"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.303939 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="kube-rbac-proxy-node"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.303947 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="kube-rbac-proxy-node"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.303959 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="ovn-acl-logging"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.303966 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="ovn-acl-logging"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.303980 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="northd"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.303985 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="northd"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.303996 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="nbdb"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.304002 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="nbdb"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.304013 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="kubecfg-setup"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.304019 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="kubecfg-setup"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.304025 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="ovn-controller"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.304031 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="ovn-controller"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.304128 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="ovn-acl-logging"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.304142 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="sbdb"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.304154 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="ovn-controller"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.304168 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="nbdb"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.304177 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="northd"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.304184 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="ovnkube-controller"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.304191 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="kube-rbac-proxy-ovn-metrics"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.304200 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="49bec440-391d-48d9-9bc6-a14f40787067" containerName="kube-rbac-proxy-node"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.317258 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2"
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321030 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-cni-bin\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321111 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-kubelet\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321140 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-etc-openvswitch\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321189 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-systemd-units\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321230 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-run-ovn\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321323 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/49bec440-391d-48d9-9bc6-a14f40787067-ovnkube-config\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321406 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-node-log\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321442 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-var-lib-openvswitch\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321464 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-log-socket\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321486 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/49bec440-391d-48d9-9bc6-a14f40787067-env-overrides\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321484 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321526 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-run-ovn-kubernetes\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321551 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321582 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-log-socket" (OuterVolumeSpecName: "log-socket") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321593 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-slash\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321656 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/49bec440-391d-48d9-9bc6-a14f40787067-ovn-node-metrics-cert\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321640 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321677 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-cni-netd\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321684 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321727 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321640 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321740 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321755 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p5kj\" (UniqueName: \"kubernetes.io/projected/49bec440-391d-48d9-9bc6-a14f40787067-kube-api-access-5p5kj\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321764 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-slash" (OuterVolumeSpecName: "host-slash") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321795 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321796 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-run-systemd\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321863 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-run-netns\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321950 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-run-openvswitch\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.321977 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/49bec440-391d-48d9-9bc6-a14f40787067-ovnkube-script-lib\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.322016 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-var-lib-cni-networks-ovn-kubernetes\") pod \"49bec440-391d-48d9-9bc6-a14f40787067\" (UID: \"49bec440-391d-48d9-9bc6-a14f40787067\") "
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.322273 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.322332 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.322613 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49bec440-391d-48d9-9bc6-a14f40787067-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.322628 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49bec440-391d-48d9-9bc6-a14f40787067-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.322998 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-node-log" (OuterVolumeSpecName: "node-log") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.322663 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.322705 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49bec440-391d-48d9-9bc6-a14f40787067-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.322807 5173 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-run-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.323077 5173 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.323093 5173 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-cni-bin\") on node \"crc\" DevicePath \"\""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.323107 5173 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-kubelet\") on node \"crc\" DevicePath \"\""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.323120 5173 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.323131 5173 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-systemd-units\") on node \"crc\" DevicePath \"\""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.323148 5173 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-run-ovn\") on node \"crc\" DevicePath \"\""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.323161 5173 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.323174 5173 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-log-socket\") on node \"crc\" DevicePath \"\""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.323206 5173 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.323219 5173 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-slash\") on node \"crc\" DevicePath \"\""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.323231 5173 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-cni-netd\") on node \"crc\" DevicePath \"\""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.335015 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49bec440-391d-48d9-9bc6-a14f40787067-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.339817 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49bec440-391d-48d9-9bc6-a14f40787067-kube-api-access-5p5kj" (OuterVolumeSpecName: "kube-api-access-5p5kj") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "kube-api-access-5p5kj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.352645 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "49bec440-391d-48d9-9bc6-a14f40787067" (UID: "49bec440-391d-48d9-9bc6-a14f40787067"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.424926 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.424983 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1671f08c-436c-478b-ad5c-8e69ea2d9d62-ovnkube-config\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425012 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-run-openvswitch\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425036 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-var-lib-openvswitch\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425057 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-run-ovn-kubernetes\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425079 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1671f08c-436c-478b-ad5c-8e69ea2d9d62-ovn-node-metrics-cert\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425100 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-log-socket\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425128 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-etc-openvswitch\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425161 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-slash\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425192 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-kubelet\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425228 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1671f08c-436c-478b-ad5c-8e69ea2d9d62-ovnkube-script-lib\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425253 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-cni-bin\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425278 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-node-log\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425300 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lscf8\" (UniqueName: \"kubernetes.io/projected/1671f08c-436c-478b-ad5c-8e69ea2d9d62-kube-api-access-lscf8\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425328 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-systemd-units\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425379 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1671f08c-436c-478b-ad5c-8e69ea2d9d62-env-overrides\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425411 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-run-systemd\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425431 5173 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-run-ovn\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425466 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-cni-netd\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425492 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-run-netns\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425550 5173 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/49bec440-391d-48d9-9bc6-a14f40787067-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425566 5173 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/49bec440-391d-48d9-9bc6-a14f40787067-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425578 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5p5kj\" (UniqueName: \"kubernetes.io/projected/49bec440-391d-48d9-9bc6-a14f40787067-kube-api-access-5p5kj\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425589 5173 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425600 5173 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425626 5173 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/49bec440-391d-48d9-9bc6-a14f40787067-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425638 5173 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/49bec440-391d-48d9-9bc6-a14f40787067-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.425649 5173 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/49bec440-391d-48d9-9bc6-a14f40787067-node-log\") on node \"crc\" DevicePath \"\"" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526643 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1671f08c-436c-478b-ad5c-8e69ea2d9d62-env-overrides\") pod \"ovnkube-node-r89b2\" (UID: 
\"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526698 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-run-systemd\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526717 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-run-ovn\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526745 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-cni-netd\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526771 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-run-netns\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526793 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-run-systemd\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526810 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526832 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1671f08c-436c-478b-ad5c-8e69ea2d9d62-ovnkube-config\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526849 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-cni-netd\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526860 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-run-openvswitch\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526885 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-var-lib-openvswitch\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526910 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-run-ovn-kubernetes\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526915 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526933 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1671f08c-436c-478b-ad5c-8e69ea2d9d62-ovn-node-metrics-cert\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526955 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-run-netns\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526957 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-log-socket\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526987 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-log-socket\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526997 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-etc-openvswitch\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.526886 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-run-ovn\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 
crc kubenswrapper[5173]: I1209 14:23:21.527025 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-run-ovn-kubernetes\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.527070 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-var-lib-openvswitch\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.527095 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-slash\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.527130 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-kubelet\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.527129 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-run-openvswitch\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.527165 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1671f08c-436c-478b-ad5c-8e69ea2d9d62-ovnkube-script-lib\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.527229 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-cni-bin\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.527256 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-node-log\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.527277 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lscf8\" (UniqueName: \"kubernetes.io/projected/1671f08c-436c-478b-ad5c-8e69ea2d9d62-kube-api-access-lscf8\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.527305 5173 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-systemd-units\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.527432 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-systemd-units\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.527449 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1671f08c-436c-478b-ad5c-8e69ea2d9d62-env-overrides\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.527507 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-slash\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.527549 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-kubelet\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.527585 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-etc-openvswitch\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.527627 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-node-log\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.527659 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1671f08c-436c-478b-ad5c-8e69ea2d9d62-host-cni-bin\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.527730 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1671f08c-436c-478b-ad5c-8e69ea2d9d62-ovnkube-script-lib\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.528005 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1671f08c-436c-478b-ad5c-8e69ea2d9d62-ovnkube-config\") pod \"ovnkube-node-r89b2\" 
(UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.535370 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1671f08c-436c-478b-ad5c-8e69ea2d9d62-ovn-node-metrics-cert\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.543078 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lscf8\" (UniqueName: \"kubernetes.io/projected/1671f08c-436c-478b-ad5c-8e69ea2d9d62-kube-api-access-lscf8\") pod \"ovnkube-node-r89b2\" (UID: \"1671f08c-436c-478b-ad5c-8e69ea2d9d62\") " pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.677271 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:21 crc kubenswrapper[5173]: W1209 14:23:21.695386 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1671f08c_436c_478b_ad5c_8e69ea2d9d62.slice/crio-7c5b49c9dc06a31fafcac41c985d6df68ed95c7d417f919b2c83ff1cd45c8949 WatchSource:0}: Error finding container 7c5b49c9dc06a31fafcac41c985d6df68ed95c7d417f919b2c83ff1cd45c8949: Status 404 returned error can't find the container with id 7c5b49c9dc06a31fafcac41c985d6df68ed95c7d417f919b2c83ff1cd45c8949 Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.788980 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc" event={"ID":"5664d8f4-c4f5-48e7-8a02-1456ddce4ee2","Type":"ContainerStarted","Data":"1d14aa65496f6bd45b510ded4fc816b24bcfd97016a3b2c7bee17dccf44741db"} Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.789030 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc" event={"ID":"5664d8f4-c4f5-48e7-8a02-1456ddce4ee2","Type":"ContainerStarted","Data":"8b1e7fbaa299611aee49fe5db2e97f2efc62bc1aea545e27c2426cfdab225eae"} Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.789044 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc" event={"ID":"5664d8f4-c4f5-48e7-8a02-1456ddce4ee2","Type":"ContainerStarted","Data":"d28555a55f221b573cbc1a7e5eea17392f14ab904cd7f9806103c577038f00e6"} Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.792139 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d24z7_a80ae74e-7470-4168-bdc1-454fa2137d7a/kube-multus/0.log" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.792253 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d24z7" event={"ID":"a80ae74e-7470-4168-bdc1-454fa2137d7a","Type":"ContainerStarted","Data":"91ecce3be20eaea80eddaa302a450bf0a316041a1f98ecdfb9a8c6590590a59c"} Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.794708 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" event={"ID":"1671f08c-436c-478b-ad5c-8e69ea2d9d62","Type":"ContainerStarted","Data":"7c5b49c9dc06a31fafcac41c985d6df68ed95c7d417f919b2c83ff1cd45c8949"} Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.798010 5173 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4hj6p_49bec440-391d-48d9-9bc6-a14f40787067/ovn-acl-logging/0.log" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.798504 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4hj6p_49bec440-391d-48d9-9bc6-a14f40787067/ovn-controller/0.log" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.799061 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" event={"ID":"49bec440-391d-48d9-9bc6-a14f40787067","Type":"ContainerDied","Data":"2f0e9c0d6183c1f4e13b7b4c20b32cc386f968dd4ca1323bb1e52b6123e35180"} Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.799102 5173 scope.go:117] "RemoveContainer" containerID="4a2bb8cc7c7e031ab4de5e733d3571412a3459cbc73b22a27811071af61a5d3b" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.799194 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4hj6p" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.807221 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-82ndc" podStartSLOduration=1.8071924799999999 podStartE2EDuration="1.80719248s" podCreationTimestamp="2025-12-09 14:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:23:21.806003503 +0000 UTC m=+684.731285760" watchObservedRunningTime="2025-12-09 14:23:21.80719248 +0000 UTC m=+684.732474767" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.880002 5173 scope.go:117] "RemoveContainer" containerID="958b3c42394f5bda4762c8a20b5ad6dc4de5947214d67c8de6fc2a7258ad7bb7" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.884983 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07ddf926-e4f7-4486-920c-8d83fca5b4da" path="/var/lib/kubelet/pods/07ddf926-e4f7-4486-920c-8d83fca5b4da/volumes" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.895575 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4hj6p"] Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.897208 5173 scope.go:117] "RemoveContainer" containerID="ddcdfec3ac8cf6eb937f71437b340c84242ca3a95a2a479d3c6ca13b5d99356a" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.902606 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4hj6p"] Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.916636 5173 scope.go:117] "RemoveContainer" containerID="3376a0f5a3173a5ec0c06f49feee9428d3596d3ecdaa8ec7fd1a9b782e0c3150" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.930379 5173 scope.go:117] "RemoveContainer" containerID="b5e039f291824aa822dd101c3d3c69b2adcedd433290701fc050827ef9923511" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.942632 5173 scope.go:117] "RemoveContainer" containerID="86442f9b1ca071f4f9eed36a71a5a1a4955e732d9115098ab6d24b3cd800059c" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.954183 5173 scope.go:117] "RemoveContainer" containerID="acdb6f15d5b3a695e73fbb6481f04162b21ec33011cd0f275a5bff46a36788ca" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 14:23:21.968764 5173 scope.go:117] "RemoveContainer" containerID="5a539f9e884ee10f4a0bba7a7ce50dd95c423b36c196046435f791e15688e2a0" Dec 09 14:23:21 crc kubenswrapper[5173]: I1209 
14:23:21.981214 5173 scope.go:117] "RemoveContainer" containerID="231e33eb8ad573ef8c7345edad3d84a71079fc1f80d66033422174e4d361015f" Dec 09 14:23:22 crc kubenswrapper[5173]: I1209 14:23:22.807612 5173 generic.go:358] "Generic (PLEG): container finished" podID="1671f08c-436c-478b-ad5c-8e69ea2d9d62" containerID="4024fb95ac8c44129f7137b434da4a25237953bd063e6d8ab043766c85ac088f" exitCode=0 Dec 09 14:23:22 crc kubenswrapper[5173]: I1209 14:23:22.807712 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" event={"ID":"1671f08c-436c-478b-ad5c-8e69ea2d9d62","Type":"ContainerDied","Data":"4024fb95ac8c44129f7137b434da4a25237953bd063e6d8ab043766c85ac088f"} Dec 09 14:23:23 crc kubenswrapper[5173]: I1209 14:23:23.817100 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" event={"ID":"1671f08c-436c-478b-ad5c-8e69ea2d9d62","Type":"ContainerStarted","Data":"39bf7714c65ace99a2c4c0f2a2b374c4a96e88679dbda63d376eea58d6f93b71"} Dec 09 14:23:23 crc kubenswrapper[5173]: I1209 14:23:23.817443 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" event={"ID":"1671f08c-436c-478b-ad5c-8e69ea2d9d62","Type":"ContainerStarted","Data":"1c9344ce4268164251822c153378116fb6a57efd8e3afb74dc7892d86eb5322e"} Dec 09 14:23:23 crc kubenswrapper[5173]: I1209 14:23:23.817455 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" event={"ID":"1671f08c-436c-478b-ad5c-8e69ea2d9d62","Type":"ContainerStarted","Data":"4cdc4bdebb9f8e682b5f4a8c98f0fad985d38ee2a30757c983398b144c248c01"} Dec 09 14:23:23 crc kubenswrapper[5173]: I1209 14:23:23.817466 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" event={"ID":"1671f08c-436c-478b-ad5c-8e69ea2d9d62","Type":"ContainerStarted","Data":"ed70dfad382cffb88dcce756985aae7251b356fe09993619a582f8afdc4cc886"} Dec 09 14:23:23 crc kubenswrapper[5173]: I1209 14:23:23.817474 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" event={"ID":"1671f08c-436c-478b-ad5c-8e69ea2d9d62","Type":"ContainerStarted","Data":"0d8a1df44ed3ab88dab1aaa3b79226cca3dc54f9e0203b6a6132613e1304b02f"} Dec 09 14:23:23 crc kubenswrapper[5173]: I1209 14:23:23.817485 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" event={"ID":"1671f08c-436c-478b-ad5c-8e69ea2d9d62","Type":"ContainerStarted","Data":"9233b231d0af1f8cae69cf55b6aba8a90d16d676bff862fd97a43f2cc0c47ddf"} Dec 09 14:23:23 crc kubenswrapper[5173]: I1209 14:23:23.877178 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49bec440-391d-48d9-9bc6-a14f40787067" path="/var/lib/kubelet/pods/49bec440-391d-48d9-9bc6-a14f40787067/volumes" Dec 09 14:23:25 crc kubenswrapper[5173]: I1209 14:23:25.833917 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" event={"ID":"1671f08c-436c-478b-ad5c-8e69ea2d9d62","Type":"ContainerStarted","Data":"5106f0f9dd2811acd58df1513739cff36163772f155efc67e06689921967f1ad"} Dec 09 14:23:31 crc kubenswrapper[5173]: I1209 14:23:31.868869 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" event={"ID":"1671f08c-436c-478b-ad5c-8e69ea2d9d62","Type":"ContainerStarted","Data":"2da68cd2055043bf3f10061aee234628ec3fc25b63ec936ed6844c2cd23729d7"} Dec 09 14:23:31 crc 
kubenswrapper[5173]: I1209 14:23:31.876644 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:31 crc kubenswrapper[5173]: I1209 14:23:31.876955 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:31 crc kubenswrapper[5173]: I1209 14:23:31.901042 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" podStartSLOduration=10.901026325 podStartE2EDuration="10.901026325s" podCreationTimestamp="2025-12-09 14:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:23:31.898371491 +0000 UTC m=+694.823653748" watchObservedRunningTime="2025-12-09 14:23:31.901026325 +0000 UTC m=+694.826308572" Dec 09 14:23:31 crc kubenswrapper[5173]: I1209 14:23:31.904185 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:32 crc kubenswrapper[5173]: I1209 14:23:32.874965 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:23:32 crc kubenswrapper[5173]: I1209 14:23:32.898249 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:24:04 crc kubenswrapper[5173]: I1209 14:24:04.908922 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r89b2" Dec 09 14:24:35 crc kubenswrapper[5173]: I1209 14:24:35.114301 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdfwh"] Dec 09 14:24:35 crc kubenswrapper[5173]: I1209 14:24:35.115312 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sdfwh" podUID="f44083e3-315f-45c4-8753-b2196a9848a9" containerName="registry-server" containerID="cri-o://20f0c6f67219bed2a8712470b764beaa4fa4a38d8edb5c7ae93519d9ab8ca98c" gracePeriod=30 Dec 09 14:24:35 crc kubenswrapper[5173]: I1209 14:24:35.451858 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdfwh" Dec 09 14:24:35 crc kubenswrapper[5173]: I1209 14:24:35.530614 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d64fz\" (UniqueName: \"kubernetes.io/projected/f44083e3-315f-45c4-8753-b2196a9848a9-kube-api-access-d64fz\") pod \"f44083e3-315f-45c4-8753-b2196a9848a9\" (UID: \"f44083e3-315f-45c4-8753-b2196a9848a9\") " Dec 09 14:24:35 crc kubenswrapper[5173]: I1209 14:24:35.531010 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f44083e3-315f-45c4-8753-b2196a9848a9-catalog-content\") pod \"f44083e3-315f-45c4-8753-b2196a9848a9\" (UID: \"f44083e3-315f-45c4-8753-b2196a9848a9\") " Dec 09 14:24:35 crc kubenswrapper[5173]: I1209 14:24:35.531183 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f44083e3-315f-45c4-8753-b2196a9848a9-utilities\") pod \"f44083e3-315f-45c4-8753-b2196a9848a9\" (UID: \"f44083e3-315f-45c4-8753-b2196a9848a9\") " Dec 09 14:24:35 crc kubenswrapper[5173]: I1209 14:24:35.532093 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f44083e3-315f-45c4-8753-b2196a9848a9-utilities" (OuterVolumeSpecName: "utilities") pod "f44083e3-315f-45c4-8753-b2196a9848a9" (UID: "f44083e3-315f-45c4-8753-b2196a9848a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:24:35 crc kubenswrapper[5173]: I1209 14:24:35.536701 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f44083e3-315f-45c4-8753-b2196a9848a9-kube-api-access-d64fz" (OuterVolumeSpecName: "kube-api-access-d64fz") pod "f44083e3-315f-45c4-8753-b2196a9848a9" (UID: "f44083e3-315f-45c4-8753-b2196a9848a9"). InnerVolumeSpecName "kube-api-access-d64fz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:24:35 crc kubenswrapper[5173]: I1209 14:24:35.539464 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f44083e3-315f-45c4-8753-b2196a9848a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f44083e3-315f-45c4-8753-b2196a9848a9" (UID: "f44083e3-315f-45c4-8753-b2196a9848a9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:24:35 crc kubenswrapper[5173]: I1209 14:24:35.632921 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d64fz\" (UniqueName: \"kubernetes.io/projected/f44083e3-315f-45c4-8753-b2196a9848a9-kube-api-access-d64fz\") on node \"crc\" DevicePath \"\"" Dec 09 14:24:35 crc kubenswrapper[5173]: I1209 14:24:35.632958 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f44083e3-315f-45c4-8753-b2196a9848a9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:24:35 crc kubenswrapper[5173]: I1209 14:24:35.632966 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f44083e3-315f-45c4-8753-b2196a9848a9-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.157069 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-56z8w"] Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.158056 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f44083e3-315f-45c4-8753-b2196a9848a9" containerName="registry-server" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.158073 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44083e3-315f-45c4-8753-b2196a9848a9" containerName="registry-server" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.158093 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f44083e3-315f-45c4-8753-b2196a9848a9" containerName="extract-utilities" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.158099 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44083e3-315f-45c4-8753-b2196a9848a9" containerName="extract-utilities" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.158125 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f44083e3-315f-45c4-8753-b2196a9848a9" containerName="extract-content" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.158130 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44083e3-315f-45c4-8753-b2196a9848a9" containerName="extract-content" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.158233 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="f44083e3-315f-45c4-8753-b2196a9848a9" containerName="registry-server" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.184511 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.184316 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-56z8w"] Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.227156 5173 generic.go:358] "Generic (PLEG): container finished" podID="f44083e3-315f-45c4-8753-b2196a9848a9" containerID="20f0c6f67219bed2a8712470b764beaa4fa4a38d8edb5c7ae93519d9ab8ca98c" exitCode=0 Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.227195 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdfwh" event={"ID":"f44083e3-315f-45c4-8753-b2196a9848a9","Type":"ContainerDied","Data":"20f0c6f67219bed2a8712470b764beaa4fa4a38d8edb5c7ae93519d9ab8ca98c"} Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.227221 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdfwh" event={"ID":"f44083e3-315f-45c4-8753-b2196a9848a9","Type":"ContainerDied","Data":"9f17405e098dbb51bf199d62fb89fefb7548d8ddda24afb4338b969c7ba273be"} Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.227240 5173 scope.go:117] "RemoveContainer" containerID="20f0c6f67219bed2a8712470b764beaa4fa4a38d8edb5c7ae93519d9ab8ca98c" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.227426 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdfwh" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.250524 5173 scope.go:117] "RemoveContainer" containerID="c62c560c9e8f734ab12defa07b6ee02ef98284cda460c7b16d090111573b78d7" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.255601 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdfwh"] Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.261963 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdfwh"] Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.265992 5173 scope.go:117] "RemoveContainer" containerID="085782f8c47de5fb97a1412d0bcf61e7d174866191071adb82db8c31c1c65d71" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.282546 5173 scope.go:117] "RemoveContainer" containerID="20f0c6f67219bed2a8712470b764beaa4fa4a38d8edb5c7ae93519d9ab8ca98c" Dec 09 14:24:36 crc kubenswrapper[5173]: E1209 14:24:36.283033 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20f0c6f67219bed2a8712470b764beaa4fa4a38d8edb5c7ae93519d9ab8ca98c\": container with ID starting with 20f0c6f67219bed2a8712470b764beaa4fa4a38d8edb5c7ae93519d9ab8ca98c not found: ID does not exist" containerID="20f0c6f67219bed2a8712470b764beaa4fa4a38d8edb5c7ae93519d9ab8ca98c" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.283077 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20f0c6f67219bed2a8712470b764beaa4fa4a38d8edb5c7ae93519d9ab8ca98c"} err="failed to get container status \"20f0c6f67219bed2a8712470b764beaa4fa4a38d8edb5c7ae93519d9ab8ca98c\": rpc error: code = NotFound desc = could not find container \"20f0c6f67219bed2a8712470b764beaa4fa4a38d8edb5c7ae93519d9ab8ca98c\": container with ID starting with 20f0c6f67219bed2a8712470b764beaa4fa4a38d8edb5c7ae93519d9ab8ca98c not found: ID does not exist" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.283108 5173 
scope.go:117] "RemoveContainer" containerID="c62c560c9e8f734ab12defa07b6ee02ef98284cda460c7b16d090111573b78d7" Dec 09 14:24:36 crc kubenswrapper[5173]: E1209 14:24:36.283440 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c62c560c9e8f734ab12defa07b6ee02ef98284cda460c7b16d090111573b78d7\": container with ID starting with c62c560c9e8f734ab12defa07b6ee02ef98284cda460c7b16d090111573b78d7 not found: ID does not exist" containerID="c62c560c9e8f734ab12defa07b6ee02ef98284cda460c7b16d090111573b78d7" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.283471 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c62c560c9e8f734ab12defa07b6ee02ef98284cda460c7b16d090111573b78d7"} err="failed to get container status \"c62c560c9e8f734ab12defa07b6ee02ef98284cda460c7b16d090111573b78d7\": rpc error: code = NotFound desc = could not find container \"c62c560c9e8f734ab12defa07b6ee02ef98284cda460c7b16d090111573b78d7\": container with ID starting with c62c560c9e8f734ab12defa07b6ee02ef98284cda460c7b16d090111573b78d7 not found: ID does not exist" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.283494 5173 scope.go:117] "RemoveContainer" containerID="085782f8c47de5fb97a1412d0bcf61e7d174866191071adb82db8c31c1c65d71" Dec 09 14:24:36 crc kubenswrapper[5173]: E1209 14:24:36.283802 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"085782f8c47de5fb97a1412d0bcf61e7d174866191071adb82db8c31c1c65d71\": container with ID starting with 085782f8c47de5fb97a1412d0bcf61e7d174866191071adb82db8c31c1c65d71 not found: ID does not exist" containerID="085782f8c47de5fb97a1412d0bcf61e7d174866191071adb82db8c31c1c65d71" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.283827 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"085782f8c47de5fb97a1412d0bcf61e7d174866191071adb82db8c31c1c65d71"} err="failed to get container status \"085782f8c47de5fb97a1412d0bcf61e7d174866191071adb82db8c31c1c65d71\": rpc error: code = NotFound desc = could not find container \"085782f8c47de5fb97a1412d0bcf61e7d174866191071adb82db8c31c1c65d71\": container with ID starting with 085782f8c47de5fb97a1412d0bcf61e7d174866191071adb82db8c31c1c65d71 not found: ID does not exist" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.340911 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/50c2789f-41db-48f3-9385-2baf6da7e899-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.340958 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/50c2789f-41db-48f3-9385-2baf6da7e899-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.341004 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.341049 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50c2789f-41db-48f3-9385-2baf6da7e899-trusted-ca\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.341317 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50c2789f-41db-48f3-9385-2baf6da7e899-bound-sa-token\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.341387 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/50c2789f-41db-48f3-9385-2baf6da7e899-registry-tls\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.341422 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksshm\" (UniqueName: \"kubernetes.io/projected/50c2789f-41db-48f3-9385-2baf6da7e899-kube-api-access-ksshm\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.341474 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/50c2789f-41db-48f3-9385-2baf6da7e899-registry-certificates\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.359991 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.443000 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/50c2789f-41db-48f3-9385-2baf6da7e899-registry-certificates\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.443057 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/50c2789f-41db-48f3-9385-2baf6da7e899-ca-trust-extracted\") pod 
\"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.443109 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/50c2789f-41db-48f3-9385-2baf6da7e899-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.443160 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50c2789f-41db-48f3-9385-2baf6da7e899-trusted-ca\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.443216 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50c2789f-41db-48f3-9385-2baf6da7e899-bound-sa-token\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.443238 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/50c2789f-41db-48f3-9385-2baf6da7e899-registry-tls\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.443259 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ksshm\" (UniqueName: \"kubernetes.io/projected/50c2789f-41db-48f3-9385-2baf6da7e899-kube-api-access-ksshm\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.444188 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/50c2789f-41db-48f3-9385-2baf6da7e899-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.444787 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50c2789f-41db-48f3-9385-2baf6da7e899-trusted-ca\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.446230 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/50c2789f-41db-48f3-9385-2baf6da7e899-registry-certificates\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.449163 5173 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/50c2789f-41db-48f3-9385-2baf6da7e899-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.449300 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/50c2789f-41db-48f3-9385-2baf6da7e899-registry-tls\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.459464 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50c2789f-41db-48f3-9385-2baf6da7e899-bound-sa-token\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.459887 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksshm\" (UniqueName: \"kubernetes.io/projected/50c2789f-41db-48f3-9385-2baf6da7e899-kube-api-access-ksshm\") pod \"image-registry-5d9d95bf5b-56z8w\" (UID: \"50c2789f-41db-48f3-9385-2baf6da7e899\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.535183 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:36 crc kubenswrapper[5173]: I1209 14:24:36.707334 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-56z8w"] Dec 09 14:24:37 crc kubenswrapper[5173]: I1209 14:24:37.233392 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" event={"ID":"50c2789f-41db-48f3-9385-2baf6da7e899","Type":"ContainerStarted","Data":"dd48b21e1c490af149a57e393bb40c407fa0a2a9f45d814374a00d5691257685"} Dec 09 14:24:37 crc kubenswrapper[5173]: I1209 14:24:37.233697 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:37 crc kubenswrapper[5173]: I1209 14:24:37.233716 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" event={"ID":"50c2789f-41db-48f3-9385-2baf6da7e899","Type":"ContainerStarted","Data":"5c88f3c82757cb70cba54708d4f281145869be5ba92bfd9e3b7de9e359257f2e"} Dec 09 14:24:37 crc kubenswrapper[5173]: I1209 14:24:37.254761 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" podStartSLOduration=1.25474371 podStartE2EDuration="1.25474371s" podCreationTimestamp="2025-12-09 14:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:24:37.251753846 +0000 UTC m=+760.177036103" watchObservedRunningTime="2025-12-09 14:24:37.25474371 +0000 UTC m=+760.180025947" Dec 09 14:24:37 crc kubenswrapper[5173]: I1209 14:24:37.877683 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f44083e3-315f-45c4-8753-b2196a9848a9" 
path="/var/lib/kubelet/pods/f44083e3-315f-45c4-8753-b2196a9848a9/volumes" Dec 09 14:24:38 crc kubenswrapper[5173]: I1209 14:24:38.759504 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs"] Dec 09 14:24:38 crc kubenswrapper[5173]: I1209 14:24:38.770322 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs"] Dec 09 14:24:38 crc kubenswrapper[5173]: I1209 14:24:38.770642 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs" Dec 09 14:24:38 crc kubenswrapper[5173]: I1209 14:24:38.772632 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 09 14:24:38 crc kubenswrapper[5173]: I1209 14:24:38.873746 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/099ae84d-622d-4ff6-99b3-9b50797e4e8a-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs\" (UID: \"099ae84d-622d-4ff6-99b3-9b50797e4e8a\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs" Dec 09 14:24:38 crc kubenswrapper[5173]: I1209 14:24:38.874086 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjv9c\" (UniqueName: \"kubernetes.io/projected/099ae84d-622d-4ff6-99b3-9b50797e4e8a-kube-api-access-bjv9c\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs\" (UID: \"099ae84d-622d-4ff6-99b3-9b50797e4e8a\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs" Dec 09 14:24:38 crc kubenswrapper[5173]: I1209 14:24:38.874283 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/099ae84d-622d-4ff6-99b3-9b50797e4e8a-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs\" (UID: \"099ae84d-622d-4ff6-99b3-9b50797e4e8a\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs" Dec 09 14:24:38 crc kubenswrapper[5173]: I1209 14:24:38.975638 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/099ae84d-622d-4ff6-99b3-9b50797e4e8a-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs\" (UID: \"099ae84d-622d-4ff6-99b3-9b50797e4e8a\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs" Dec 09 14:24:38 crc kubenswrapper[5173]: I1209 14:24:38.976152 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/099ae84d-622d-4ff6-99b3-9b50797e4e8a-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs\" (UID: \"099ae84d-622d-4ff6-99b3-9b50797e4e8a\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs" Dec 09 14:24:38 crc kubenswrapper[5173]: I1209 14:24:38.976594 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/099ae84d-622d-4ff6-99b3-9b50797e4e8a-util\") pod 
\"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs\" (UID: \"099ae84d-622d-4ff6-99b3-9b50797e4e8a\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs" Dec 09 14:24:38 crc kubenswrapper[5173]: I1209 14:24:38.976279 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/099ae84d-622d-4ff6-99b3-9b50797e4e8a-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs\" (UID: \"099ae84d-622d-4ff6-99b3-9b50797e4e8a\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs" Dec 09 14:24:38 crc kubenswrapper[5173]: I1209 14:24:38.976707 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bjv9c\" (UniqueName: \"kubernetes.io/projected/099ae84d-622d-4ff6-99b3-9b50797e4e8a-kube-api-access-bjv9c\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs\" (UID: \"099ae84d-622d-4ff6-99b3-9b50797e4e8a\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs" Dec 09 14:24:39 crc kubenswrapper[5173]: I1209 14:24:39.001535 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjv9c\" (UniqueName: \"kubernetes.io/projected/099ae84d-622d-4ff6-99b3-9b50797e4e8a-kube-api-access-bjv9c\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs\" (UID: \"099ae84d-622d-4ff6-99b3-9b50797e4e8a\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs" Dec 09 14:24:39 crc kubenswrapper[5173]: I1209 14:24:39.096540 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs" Dec 09 14:24:39 crc kubenswrapper[5173]: I1209 14:24:39.284108 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs"] Dec 09 14:24:39 crc kubenswrapper[5173]: W1209 14:24:39.293537 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod099ae84d_622d_4ff6_99b3_9b50797e4e8a.slice/crio-d8720dc839df782ebccae572eb6faa8f352d8ae5b751e4c0b1b621539eefc151 WatchSource:0}: Error finding container d8720dc839df782ebccae572eb6faa8f352d8ae5b751e4c0b1b621539eefc151: Status 404 returned error can't find the container with id d8720dc839df782ebccae572eb6faa8f352d8ae5b751e4c0b1b621539eefc151 Dec 09 14:24:40 crc kubenswrapper[5173]: I1209 14:24:40.260389 5173 generic.go:358] "Generic (PLEG): container finished" podID="099ae84d-622d-4ff6-99b3-9b50797e4e8a" containerID="57fa3a174c09d307ad471e0dfb5d3c040c04d49b19c0944fc437560d369f3421" exitCode=0 Dec 09 14:24:40 crc kubenswrapper[5173]: I1209 14:24:40.260730 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs" event={"ID":"099ae84d-622d-4ff6-99b3-9b50797e4e8a","Type":"ContainerDied","Data":"57fa3a174c09d307ad471e0dfb5d3c040c04d49b19c0944fc437560d369f3421"} Dec 09 14:24:40 crc kubenswrapper[5173]: I1209 14:24:40.260970 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs" 
event={"ID":"099ae84d-622d-4ff6-99b3-9b50797e4e8a","Type":"ContainerStarted","Data":"d8720dc839df782ebccae572eb6faa8f352d8ae5b751e4c0b1b621539eefc151"} Dec 09 14:24:41 crc kubenswrapper[5173]: I1209 14:24:41.269254 5173 generic.go:358] "Generic (PLEG): container finished" podID="099ae84d-622d-4ff6-99b3-9b50797e4e8a" containerID="0ef710ff5c3dc869d99d057a32ae495d50205bae528323ba33c4b45132f491d7" exitCode=0 Dec 09 14:24:41 crc kubenswrapper[5173]: I1209 14:24:41.269423 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs" event={"ID":"099ae84d-622d-4ff6-99b3-9b50797e4e8a","Type":"ContainerDied","Data":"0ef710ff5c3dc869d99d057a32ae495d50205bae528323ba33c4b45132f491d7"} Dec 09 14:24:41 crc kubenswrapper[5173]: I1209 14:24:41.514927 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dkd82"] Dec 09 14:24:41 crc kubenswrapper[5173]: I1209 14:24:41.521662 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dkd82" Dec 09 14:24:41 crc kubenswrapper[5173]: I1209 14:24:41.542120 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dkd82"] Dec 09 14:24:41 crc kubenswrapper[5173]: I1209 14:24:41.615087 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0a7a74-d3a8-4278-9831-91fdd079f449-catalog-content\") pod \"redhat-operators-dkd82\" (UID: \"fa0a7a74-d3a8-4278-9831-91fdd079f449\") " pod="openshift-marketplace/redhat-operators-dkd82" Dec 09 14:24:41 crc kubenswrapper[5173]: I1209 14:24:41.615170 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0a7a74-d3a8-4278-9831-91fdd079f449-utilities\") pod \"redhat-operators-dkd82\" (UID: \"fa0a7a74-d3a8-4278-9831-91fdd079f449\") " pod="openshift-marketplace/redhat-operators-dkd82" Dec 09 14:24:41 crc kubenswrapper[5173]: I1209 14:24:41.615211 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nftqf\" (UniqueName: \"kubernetes.io/projected/fa0a7a74-d3a8-4278-9831-91fdd079f449-kube-api-access-nftqf\") pod \"redhat-operators-dkd82\" (UID: \"fa0a7a74-d3a8-4278-9831-91fdd079f449\") " pod="openshift-marketplace/redhat-operators-dkd82" Dec 09 14:24:41 crc kubenswrapper[5173]: I1209 14:24:41.717105 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0a7a74-d3a8-4278-9831-91fdd079f449-catalog-content\") pod \"redhat-operators-dkd82\" (UID: \"fa0a7a74-d3a8-4278-9831-91fdd079f449\") " pod="openshift-marketplace/redhat-operators-dkd82" Dec 09 14:24:41 crc kubenswrapper[5173]: I1209 14:24:41.717178 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0a7a74-d3a8-4278-9831-91fdd079f449-utilities\") pod \"redhat-operators-dkd82\" (UID: \"fa0a7a74-d3a8-4278-9831-91fdd079f449\") " pod="openshift-marketplace/redhat-operators-dkd82" Dec 09 14:24:41 crc kubenswrapper[5173]: I1209 14:24:41.717240 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nftqf\" (UniqueName: 
\"kubernetes.io/projected/fa0a7a74-d3a8-4278-9831-91fdd079f449-kube-api-access-nftqf\") pod \"redhat-operators-dkd82\" (UID: \"fa0a7a74-d3a8-4278-9831-91fdd079f449\") " pod="openshift-marketplace/redhat-operators-dkd82" Dec 09 14:24:41 crc kubenswrapper[5173]: I1209 14:24:41.717610 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0a7a74-d3a8-4278-9831-91fdd079f449-catalog-content\") pod \"redhat-operators-dkd82\" (UID: \"fa0a7a74-d3a8-4278-9831-91fdd079f449\") " pod="openshift-marketplace/redhat-operators-dkd82" Dec 09 14:24:41 crc kubenswrapper[5173]: I1209 14:24:41.717839 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0a7a74-d3a8-4278-9831-91fdd079f449-utilities\") pod \"redhat-operators-dkd82\" (UID: \"fa0a7a74-d3a8-4278-9831-91fdd079f449\") " pod="openshift-marketplace/redhat-operators-dkd82" Dec 09 14:24:41 crc kubenswrapper[5173]: I1209 14:24:41.740073 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nftqf\" (UniqueName: \"kubernetes.io/projected/fa0a7a74-d3a8-4278-9831-91fdd079f449-kube-api-access-nftqf\") pod \"redhat-operators-dkd82\" (UID: \"fa0a7a74-d3a8-4278-9831-91fdd079f449\") " pod="openshift-marketplace/redhat-operators-dkd82" Dec 09 14:24:41 crc kubenswrapper[5173]: I1209 14:24:41.856100 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dkd82" Dec 09 14:24:42 crc kubenswrapper[5173]: I1209 14:24:42.096874 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dkd82"] Dec 09 14:24:42 crc kubenswrapper[5173]: I1209 14:24:42.279612 5173 generic.go:358] "Generic (PLEG): container finished" podID="099ae84d-622d-4ff6-99b3-9b50797e4e8a" containerID="f26af14c604d59ba1b22009ca7a34912b2c227df2233edc18d7ade267a0f9895" exitCode=0 Dec 09 14:24:42 crc kubenswrapper[5173]: I1209 14:24:42.279911 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs" event={"ID":"099ae84d-622d-4ff6-99b3-9b50797e4e8a","Type":"ContainerDied","Data":"f26af14c604d59ba1b22009ca7a34912b2c227df2233edc18d7ade267a0f9895"} Dec 09 14:24:42 crc kubenswrapper[5173]: I1209 14:24:42.282946 5173 generic.go:358] "Generic (PLEG): container finished" podID="fa0a7a74-d3a8-4278-9831-91fdd079f449" containerID="7191d8b7f8840bc0c09f58c44ef6e62d6acf5021a2e59a2de8cfe5d2bb2f7fb2" exitCode=0 Dec 09 14:24:42 crc kubenswrapper[5173]: I1209 14:24:42.283190 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkd82" event={"ID":"fa0a7a74-d3a8-4278-9831-91fdd079f449","Type":"ContainerDied","Data":"7191d8b7f8840bc0c09f58c44ef6e62d6acf5021a2e59a2de8cfe5d2bb2f7fb2"} Dec 09 14:24:42 crc kubenswrapper[5173]: I1209 14:24:42.283500 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkd82" event={"ID":"fa0a7a74-d3a8-4278-9831-91fdd079f449","Type":"ContainerStarted","Data":"d1cc5daa2e90ad97fb672b6052df94c0866ceb2d62c148a8846ae4bd9dc7cd2a"} Dec 09 14:24:43 crc kubenswrapper[5173]: I1209 14:24:43.291720 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkd82" 
event={"ID":"fa0a7a74-d3a8-4278-9831-91fdd079f449","Type":"ContainerStarted","Data":"2a553f9703db7b043b7ff51c4eac01f0f7871258543a7218ae860e348eaad21c"} Dec 09 14:24:43 crc kubenswrapper[5173]: I1209 14:24:43.538395 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs" Dec 09 14:24:43 crc kubenswrapper[5173]: I1209 14:24:43.642045 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjv9c\" (UniqueName: \"kubernetes.io/projected/099ae84d-622d-4ff6-99b3-9b50797e4e8a-kube-api-access-bjv9c\") pod \"099ae84d-622d-4ff6-99b3-9b50797e4e8a\" (UID: \"099ae84d-622d-4ff6-99b3-9b50797e4e8a\") " Dec 09 14:24:43 crc kubenswrapper[5173]: I1209 14:24:43.642178 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/099ae84d-622d-4ff6-99b3-9b50797e4e8a-bundle\") pod \"099ae84d-622d-4ff6-99b3-9b50797e4e8a\" (UID: \"099ae84d-622d-4ff6-99b3-9b50797e4e8a\") " Dec 09 14:24:43 crc kubenswrapper[5173]: I1209 14:24:43.642280 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/099ae84d-622d-4ff6-99b3-9b50797e4e8a-util\") pod \"099ae84d-622d-4ff6-99b3-9b50797e4e8a\" (UID: \"099ae84d-622d-4ff6-99b3-9b50797e4e8a\") " Dec 09 14:24:43 crc kubenswrapper[5173]: I1209 14:24:43.644527 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/099ae84d-622d-4ff6-99b3-9b50797e4e8a-bundle" (OuterVolumeSpecName: "bundle") pod "099ae84d-622d-4ff6-99b3-9b50797e4e8a" (UID: "099ae84d-622d-4ff6-99b3-9b50797e4e8a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:24:43 crc kubenswrapper[5173]: I1209 14:24:43.654633 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/099ae84d-622d-4ff6-99b3-9b50797e4e8a-util" (OuterVolumeSpecName: "util") pod "099ae84d-622d-4ff6-99b3-9b50797e4e8a" (UID: "099ae84d-622d-4ff6-99b3-9b50797e4e8a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:24:43 crc kubenswrapper[5173]: I1209 14:24:43.698851 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/099ae84d-622d-4ff6-99b3-9b50797e4e8a-kube-api-access-bjv9c" (OuterVolumeSpecName: "kube-api-access-bjv9c") pod "099ae84d-622d-4ff6-99b3-9b50797e4e8a" (UID: "099ae84d-622d-4ff6-99b3-9b50797e4e8a"). InnerVolumeSpecName "kube-api-access-bjv9c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:24:43 crc kubenswrapper[5173]: I1209 14:24:43.744129 5173 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/099ae84d-622d-4ff6-99b3-9b50797e4e8a-util\") on node \"crc\" DevicePath \"\"" Dec 09 14:24:43 crc kubenswrapper[5173]: I1209 14:24:43.744170 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bjv9c\" (UniqueName: \"kubernetes.io/projected/099ae84d-622d-4ff6-99b3-9b50797e4e8a-kube-api-access-bjv9c\") on node \"crc\" DevicePath \"\"" Dec 09 14:24:43 crc kubenswrapper[5173]: I1209 14:24:43.744187 5173 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/099ae84d-622d-4ff6-99b3-9b50797e4e8a-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:24:44 crc kubenswrapper[5173]: I1209 14:24:44.300812 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs" Dec 09 14:24:44 crc kubenswrapper[5173]: I1209 14:24:44.301223 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210brlhs" event={"ID":"099ae84d-622d-4ff6-99b3-9b50797e4e8a","Type":"ContainerDied","Data":"d8720dc839df782ebccae572eb6faa8f352d8ae5b751e4c0b1b621539eefc151"} Dec 09 14:24:44 crc kubenswrapper[5173]: I1209 14:24:44.301245 5173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8720dc839df782ebccae572eb6faa8f352d8ae5b751e4c0b1b621539eefc151" Dec 09 14:24:45 crc kubenswrapper[5173]: I1209 14:24:45.321058 5173 generic.go:358] "Generic (PLEG): container finished" podID="fa0a7a74-d3a8-4278-9831-91fdd079f449" containerID="2a553f9703db7b043b7ff51c4eac01f0f7871258543a7218ae860e348eaad21c" exitCode=0 Dec 09 14:24:45 crc kubenswrapper[5173]: I1209 14:24:45.321142 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkd82" event={"ID":"fa0a7a74-d3a8-4278-9831-91fdd079f449","Type":"ContainerDied","Data":"2a553f9703db7b043b7ff51c4eac01f0f7871258543a7218ae860e348eaad21c"} Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.151778 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt"] Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.152309 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="099ae84d-622d-4ff6-99b3-9b50797e4e8a" containerName="pull" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.152326 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="099ae84d-622d-4ff6-99b3-9b50797e4e8a" containerName="pull" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.152335 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="099ae84d-622d-4ff6-99b3-9b50797e4e8a" containerName="extract" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.152341 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="099ae84d-622d-4ff6-99b3-9b50797e4e8a" containerName="extract" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.152363 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="099ae84d-622d-4ff6-99b3-9b50797e4e8a" containerName="util" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.152372 5173 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="099ae84d-622d-4ff6-99b3-9b50797e4e8a" containerName="util" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.152478 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="099ae84d-622d-4ff6-99b3-9b50797e4e8a" containerName="extract" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.181847 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt"] Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.182010 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.184893 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.186242 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7m29\" (UniqueName: \"kubernetes.io/projected/f00eb154-1469-44bb-bf3c-fecdfabc2a7f-kube-api-access-n7m29\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt\" (UID: \"f00eb154-1469-44bb-bf3c-fecdfabc2a7f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.186318 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f00eb154-1469-44bb-bf3c-fecdfabc2a7f-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt\" (UID: \"f00eb154-1469-44bb-bf3c-fecdfabc2a7f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.186461 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f00eb154-1469-44bb-bf3c-fecdfabc2a7f-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt\" (UID: \"f00eb154-1469-44bb-bf3c-fecdfabc2a7f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.287242 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f00eb154-1469-44bb-bf3c-fecdfabc2a7f-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt\" (UID: \"f00eb154-1469-44bb-bf3c-fecdfabc2a7f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.287532 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f00eb154-1469-44bb-bf3c-fecdfabc2a7f-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt\" (UID: \"f00eb154-1469-44bb-bf3c-fecdfabc2a7f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.287809 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f00eb154-1469-44bb-bf3c-fecdfabc2a7f-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt\" 
(UID: \"f00eb154-1469-44bb-bf3c-fecdfabc2a7f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.287934 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f00eb154-1469-44bb-bf3c-fecdfabc2a7f-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt\" (UID: \"f00eb154-1469-44bb-bf3c-fecdfabc2a7f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.288209 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n7m29\" (UniqueName: \"kubernetes.io/projected/f00eb154-1469-44bb-bf3c-fecdfabc2a7f-kube-api-access-n7m29\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt\" (UID: \"f00eb154-1469-44bb-bf3c-fecdfabc2a7f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.307705 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7m29\" (UniqueName: \"kubernetes.io/projected/f00eb154-1469-44bb-bf3c-fecdfabc2a7f-kube-api-access-n7m29\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt\" (UID: \"f00eb154-1469-44bb-bf3c-fecdfabc2a7f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.327709 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkd82" event={"ID":"fa0a7a74-d3a8-4278-9831-91fdd079f449","Type":"ContainerStarted","Data":"4215c7a07c3e1b909cbb411ae2640aeac348319f6d1becb2f9a08d4226b2f2a7"} Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.345459 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dkd82" podStartSLOduration=4.619511933 podStartE2EDuration="5.345442339s" podCreationTimestamp="2025-12-09 14:24:41 +0000 UTC" firstStartedPulling="2025-12-09 14:24:42.284385109 +0000 UTC m=+765.209667356" lastFinishedPulling="2025-12-09 14:24:43.010315495 +0000 UTC m=+765.935597762" observedRunningTime="2025-12-09 14:24:46.343317441 +0000 UTC m=+769.268599708" watchObservedRunningTime="2025-12-09 14:24:46.345442339 +0000 UTC m=+769.270724586" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.496332 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt" Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.715934 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt"] Dec 09 14:24:46 crc kubenswrapper[5173]: W1209 14:24:46.722703 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf00eb154_1469_44bb_bf3c_fecdfabc2a7f.slice/crio-b73f31230e799c5d7a413bbe9dac4b68ae0872ea670d6d2ef054802ff24f804a WatchSource:0}: Error finding container b73f31230e799c5d7a413bbe9dac4b68ae0872ea670d6d2ef054802ff24f804a: Status 404 returned error can't find the container with id b73f31230e799c5d7a413bbe9dac4b68ae0872ea670d6d2ef054802ff24f804a Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.971597 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr"] Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.979146 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr"] Dec 09 14:24:46 crc kubenswrapper[5173]: I1209 14:24:46.979283 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" Dec 09 14:24:47 crc kubenswrapper[5173]: I1209 14:24:47.001192 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d182cae-1cf2-46b4-accb-db755e8d7f16-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr\" (UID: \"9d182cae-1cf2-46b4-accb-db755e8d7f16\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" Dec 09 14:24:47 crc kubenswrapper[5173]: I1209 14:24:47.001270 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d182cae-1cf2-46b4-accb-db755e8d7f16-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr\" (UID: \"9d182cae-1cf2-46b4-accb-db755e8d7f16\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" Dec 09 14:24:47 crc kubenswrapper[5173]: I1209 14:24:47.001397 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mvln\" (UniqueName: \"kubernetes.io/projected/9d182cae-1cf2-46b4-accb-db755e8d7f16-kube-api-access-5mvln\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr\" (UID: \"9d182cae-1cf2-46b4-accb-db755e8d7f16\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" Dec 09 14:24:47 crc kubenswrapper[5173]: I1209 14:24:47.104180 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5mvln\" (UniqueName: \"kubernetes.io/projected/9d182cae-1cf2-46b4-accb-db755e8d7f16-kube-api-access-5mvln\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr\" (UID: \"9d182cae-1cf2-46b4-accb-db755e8d7f16\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" Dec 09 14:24:47 crc kubenswrapper[5173]: I1209 14:24:47.104264 5173 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d182cae-1cf2-46b4-accb-db755e8d7f16-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr\" (UID: \"9d182cae-1cf2-46b4-accb-db755e8d7f16\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" Dec 09 14:24:47 crc kubenswrapper[5173]: I1209 14:24:47.104316 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d182cae-1cf2-46b4-accb-db755e8d7f16-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr\" (UID: \"9d182cae-1cf2-46b4-accb-db755e8d7f16\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" Dec 09 14:24:47 crc kubenswrapper[5173]: I1209 14:24:47.104962 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d182cae-1cf2-46b4-accb-db755e8d7f16-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr\" (UID: \"9d182cae-1cf2-46b4-accb-db755e8d7f16\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" Dec 09 14:24:47 crc kubenswrapper[5173]: I1209 14:24:47.104985 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d182cae-1cf2-46b4-accb-db755e8d7f16-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr\" (UID: \"9d182cae-1cf2-46b4-accb-db755e8d7f16\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" Dec 09 14:24:47 crc kubenswrapper[5173]: I1209 14:24:47.124552 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mvln\" (UniqueName: \"kubernetes.io/projected/9d182cae-1cf2-46b4-accb-db755e8d7f16-kube-api-access-5mvln\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr\" (UID: \"9d182cae-1cf2-46b4-accb-db755e8d7f16\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" Dec 09 14:24:47 crc kubenswrapper[5173]: I1209 14:24:47.302501 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" Dec 09 14:24:47 crc kubenswrapper[5173]: I1209 14:24:47.334632 5173 generic.go:358] "Generic (PLEG): container finished" podID="f00eb154-1469-44bb-bf3c-fecdfabc2a7f" containerID="5e5e19ca80fddd889f7c54cb3869d4072cf8c096cc396f6607e163f712708f9b" exitCode=0 Dec 09 14:24:47 crc kubenswrapper[5173]: I1209 14:24:47.334735 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt" event={"ID":"f00eb154-1469-44bb-bf3c-fecdfabc2a7f","Type":"ContainerDied","Data":"5e5e19ca80fddd889f7c54cb3869d4072cf8c096cc396f6607e163f712708f9b"} Dec 09 14:24:47 crc kubenswrapper[5173]: I1209 14:24:47.334757 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt" event={"ID":"f00eb154-1469-44bb-bf3c-fecdfabc2a7f","Type":"ContainerStarted","Data":"b73f31230e799c5d7a413bbe9dac4b68ae0872ea670d6d2ef054802ff24f804a"} Dec 09 14:24:47 crc kubenswrapper[5173]: I1209 14:24:47.750408 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr"] Dec 09 14:24:48 crc kubenswrapper[5173]: I1209 14:24:48.348928 5173 generic.go:358] "Generic (PLEG): container finished" podID="9d182cae-1cf2-46b4-accb-db755e8d7f16" containerID="3312b5b9ded5760169bdd7c17a9d13dd33dc7dcbcece9c7d60cf77c1413f537b" exitCode=0 Dec 09 14:24:48 crc kubenswrapper[5173]: I1209 14:24:48.349056 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" event={"ID":"9d182cae-1cf2-46b4-accb-db755e8d7f16","Type":"ContainerDied","Data":"3312b5b9ded5760169bdd7c17a9d13dd33dc7dcbcece9c7d60cf77c1413f537b"} Dec 09 14:24:48 crc kubenswrapper[5173]: I1209 14:24:48.349142 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" event={"ID":"9d182cae-1cf2-46b4-accb-db755e8d7f16","Type":"ContainerStarted","Data":"d1eec721e1880a018f16311e64b362d1eadb7cffdcbcfc4cfc283e9887d0476a"} Dec 09 14:24:49 crc kubenswrapper[5173]: I1209 14:24:49.085401 5173 patch_prober.go:28] interesting pod/machine-config-daemon-pxfmg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:24:49 crc kubenswrapper[5173]: I1209 14:24:49.085461 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:24:49 crc kubenswrapper[5173]: I1209 14:24:49.356322 5173 generic.go:358] "Generic (PLEG): container finished" podID="f00eb154-1469-44bb-bf3c-fecdfabc2a7f" containerID="87b0de6684984e2a6c567983313d81a2be736fa38e7d6c92135d62007f584b0c" exitCode=0 Dec 09 14:24:49 crc kubenswrapper[5173]: I1209 14:24:49.356412 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt" 
event={"ID":"f00eb154-1469-44bb-bf3c-fecdfabc2a7f","Type":"ContainerDied","Data":"87b0de6684984e2a6c567983313d81a2be736fa38e7d6c92135d62007f584b0c"} Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.102560 5173 generic.go:358] "Generic (PLEG): container finished" podID="9d182cae-1cf2-46b4-accb-db755e8d7f16" containerID="f4ea7ee8cab0a490c3a2302765116e4b6b6916086c56e01182b2fc8e9cdf91e1" exitCode=0 Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.102976 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" event={"ID":"9d182cae-1cf2-46b4-accb-db755e8d7f16","Type":"ContainerDied","Data":"f4ea7ee8cab0a490c3a2302765116e4b6b6916086c56e01182b2fc8e9cdf91e1"} Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.109998 5173 generic.go:358] "Generic (PLEG): container finished" podID="f00eb154-1469-44bb-bf3c-fecdfabc2a7f" containerID="86b5ea8c570d8f5d13760a485c1e29fd8bcff2545ce9ad0aa20def580cc0c451" exitCode=0 Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.110218 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt" event={"ID":"f00eb154-1469-44bb-bf3c-fecdfabc2a7f","Type":"ContainerDied","Data":"86b5ea8c570d8f5d13760a485c1e29fd8bcff2545ce9ad0aa20def580cc0c451"} Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.172129 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p89mp"] Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.180917 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p89mp" Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.201732 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p89mp"] Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.296218 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8883851f-49c8-4275-a8b5-90f065c14dbd-catalog-content\") pod \"certified-operators-p89mp\" (UID: \"8883851f-49c8-4275-a8b5-90f065c14dbd\") " pod="openshift-marketplace/certified-operators-p89mp" Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.296499 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8883851f-49c8-4275-a8b5-90f065c14dbd-utilities\") pod \"certified-operators-p89mp\" (UID: \"8883851f-49c8-4275-a8b5-90f065c14dbd\") " pod="openshift-marketplace/certified-operators-p89mp" Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.296601 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsshg\" (UniqueName: \"kubernetes.io/projected/8883851f-49c8-4275-a8b5-90f065c14dbd-kube-api-access-lsshg\") pod \"certified-operators-p89mp\" (UID: \"8883851f-49c8-4275-a8b5-90f065c14dbd\") " pod="openshift-marketplace/certified-operators-p89mp" Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.397432 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8883851f-49c8-4275-a8b5-90f065c14dbd-utilities\") pod \"certified-operators-p89mp\" (UID: \"8883851f-49c8-4275-a8b5-90f065c14dbd\") " pod="openshift-marketplace/certified-operators-p89mp" Dec 09 
14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.397854 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lsshg\" (UniqueName: \"kubernetes.io/projected/8883851f-49c8-4275-a8b5-90f065c14dbd-kube-api-access-lsshg\") pod \"certified-operators-p89mp\" (UID: \"8883851f-49c8-4275-a8b5-90f065c14dbd\") " pod="openshift-marketplace/certified-operators-p89mp" Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.397990 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8883851f-49c8-4275-a8b5-90f065c14dbd-catalog-content\") pod \"certified-operators-p89mp\" (UID: \"8883851f-49c8-4275-a8b5-90f065c14dbd\") " pod="openshift-marketplace/certified-operators-p89mp" Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.398608 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8883851f-49c8-4275-a8b5-90f065c14dbd-catalog-content\") pod \"certified-operators-p89mp\" (UID: \"8883851f-49c8-4275-a8b5-90f065c14dbd\") " pod="openshift-marketplace/certified-operators-p89mp" Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.398996 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8883851f-49c8-4275-a8b5-90f065c14dbd-utilities\") pod \"certified-operators-p89mp\" (UID: \"8883851f-49c8-4275-a8b5-90f065c14dbd\") " pod="openshift-marketplace/certified-operators-p89mp" Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.437513 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsshg\" (UniqueName: \"kubernetes.io/projected/8883851f-49c8-4275-a8b5-90f065c14dbd-kube-api-access-lsshg\") pod \"certified-operators-p89mp\" (UID: \"8883851f-49c8-4275-a8b5-90f065c14dbd\") " pod="openshift-marketplace/certified-operators-p89mp" Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.498742 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p89mp" Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.879089 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-dkd82" Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.879140 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dkd82" Dec 09 14:24:51 crc kubenswrapper[5173]: I1209 14:24:51.929859 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dkd82" Dec 09 14:24:52 crc kubenswrapper[5173]: I1209 14:24:52.119324 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" event={"ID":"9d182cae-1cf2-46b4-accb-db755e8d7f16","Type":"ContainerStarted","Data":"62c7ca1d06bb13e96e4d759cf9b231afc3f361662acbca92d3b37f30a6a53bf4"} Dec 09 14:24:52 crc kubenswrapper[5173]: I1209 14:24:52.168251 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" podStartSLOduration=5.058037771 podStartE2EDuration="6.168233142s" podCreationTimestamp="2025-12-09 14:24:46 +0000 UTC" firstStartedPulling="2025-12-09 14:24:48.350723426 +0000 UTC m=+771.276005673" lastFinishedPulling="2025-12-09 14:24:49.460918787 +0000 UTC m=+772.386201044" observedRunningTime="2025-12-09 14:24:52.164135793 +0000 UTC m=+775.089418050" watchObservedRunningTime="2025-12-09 14:24:52.168233142 +0000 UTC m=+775.093515389" Dec 09 14:24:52 crc kubenswrapper[5173]: I1209 14:24:52.172806 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p89mp"] Dec 09 14:24:52 crc kubenswrapper[5173]: I1209 14:24:52.231026 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dkd82" Dec 09 14:24:52 crc kubenswrapper[5173]: I1209 14:24:52.608981 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt" Dec 09 14:24:52 crc kubenswrapper[5173]: I1209 14:24:52.729636 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f00eb154-1469-44bb-bf3c-fecdfabc2a7f-bundle\") pod \"f00eb154-1469-44bb-bf3c-fecdfabc2a7f\" (UID: \"f00eb154-1469-44bb-bf3c-fecdfabc2a7f\") " Dec 09 14:24:52 crc kubenswrapper[5173]: I1209 14:24:52.729703 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7m29\" (UniqueName: \"kubernetes.io/projected/f00eb154-1469-44bb-bf3c-fecdfabc2a7f-kube-api-access-n7m29\") pod \"f00eb154-1469-44bb-bf3c-fecdfabc2a7f\" (UID: \"f00eb154-1469-44bb-bf3c-fecdfabc2a7f\") " Dec 09 14:24:52 crc kubenswrapper[5173]: I1209 14:24:52.729727 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f00eb154-1469-44bb-bf3c-fecdfabc2a7f-util\") pod \"f00eb154-1469-44bb-bf3c-fecdfabc2a7f\" (UID: \"f00eb154-1469-44bb-bf3c-fecdfabc2a7f\") " Dec 09 14:24:52 crc kubenswrapper[5173]: I1209 14:24:52.730397 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f00eb154-1469-44bb-bf3c-fecdfabc2a7f-bundle" (OuterVolumeSpecName: "bundle") pod "f00eb154-1469-44bb-bf3c-fecdfabc2a7f" (UID: "f00eb154-1469-44bb-bf3c-fecdfabc2a7f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:24:52 crc kubenswrapper[5173]: I1209 14:24:52.735294 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f00eb154-1469-44bb-bf3c-fecdfabc2a7f-kube-api-access-n7m29" (OuterVolumeSpecName: "kube-api-access-n7m29") pod "f00eb154-1469-44bb-bf3c-fecdfabc2a7f" (UID: "f00eb154-1469-44bb-bf3c-fecdfabc2a7f"). InnerVolumeSpecName "kube-api-access-n7m29". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:24:52 crc kubenswrapper[5173]: I1209 14:24:52.738745 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f00eb154-1469-44bb-bf3c-fecdfabc2a7f-util" (OuterVolumeSpecName: "util") pod "f00eb154-1469-44bb-bf3c-fecdfabc2a7f" (UID: "f00eb154-1469-44bb-bf3c-fecdfabc2a7f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:24:52 crc kubenswrapper[5173]: I1209 14:24:52.831024 5173 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f00eb154-1469-44bb-bf3c-fecdfabc2a7f-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:24:52 crc kubenswrapper[5173]: I1209 14:24:52.831078 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n7m29\" (UniqueName: \"kubernetes.io/projected/f00eb154-1469-44bb-bf3c-fecdfabc2a7f-kube-api-access-n7m29\") on node \"crc\" DevicePath \"\"" Dec 09 14:24:52 crc kubenswrapper[5173]: I1209 14:24:52.831091 5173 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f00eb154-1469-44bb-bf3c-fecdfabc2a7f-util\") on node \"crc\" DevicePath \"\"" Dec 09 14:24:53 crc kubenswrapper[5173]: I1209 14:24:53.126313 5173 generic.go:358] "Generic (PLEG): container finished" podID="8883851f-49c8-4275-a8b5-90f065c14dbd" containerID="c1a4a24535fab5d82eab0277cb8ffb3a5206ac1c4405f3b302f2e432e5e8d529" exitCode=0 Dec 09 14:24:53 crc kubenswrapper[5173]: I1209 14:24:53.126470 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p89mp" event={"ID":"8883851f-49c8-4275-a8b5-90f065c14dbd","Type":"ContainerDied","Data":"c1a4a24535fab5d82eab0277cb8ffb3a5206ac1c4405f3b302f2e432e5e8d529"} Dec 09 14:24:53 crc kubenswrapper[5173]: I1209 14:24:53.126496 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p89mp" event={"ID":"8883851f-49c8-4275-a8b5-90f065c14dbd","Type":"ContainerStarted","Data":"88191a00d64558519fa2cf7cc7220ea6cf3af656361f176d06e12bd842ede7d7"} Dec 09 14:24:53 crc kubenswrapper[5173]: I1209 14:24:53.128542 5173 generic.go:358] "Generic (PLEG): container finished" podID="9d182cae-1cf2-46b4-accb-db755e8d7f16" containerID="62c7ca1d06bb13e96e4d759cf9b231afc3f361662acbca92d3b37f30a6a53bf4" exitCode=0 Dec 09 14:24:53 crc kubenswrapper[5173]: I1209 14:24:53.128594 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" event={"ID":"9d182cae-1cf2-46b4-accb-db755e8d7f16","Type":"ContainerDied","Data":"62c7ca1d06bb13e96e4d759cf9b231afc3f361662acbca92d3b37f30a6a53bf4"} Dec 09 14:24:53 crc kubenswrapper[5173]: I1209 14:24:53.130941 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt" Dec 09 14:24:53 crc kubenswrapper[5173]: I1209 14:24:53.130943 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fjxqxt" event={"ID":"f00eb154-1469-44bb-bf3c-fecdfabc2a7f","Type":"ContainerDied","Data":"b73f31230e799c5d7a413bbe9dac4b68ae0872ea670d6d2ef054802ff24f804a"} Dec 09 14:24:53 crc kubenswrapper[5173]: I1209 14:24:53.131117 5173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b73f31230e799c5d7a413bbe9dac4b68ae0872ea670d6d2ef054802ff24f804a" Dec 09 14:24:53 crc kubenswrapper[5173]: I1209 14:24:53.416598 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dkd82"] Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.137201 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dkd82" podUID="fa0a7a74-d3a8-4278-9831-91fdd079f449" containerName="registry-server" containerID="cri-o://4215c7a07c3e1b909cbb411ae2640aeac348319f6d1becb2f9a08d4226b2f2a7" gracePeriod=2 Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.207650 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr"] Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.215447 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f00eb154-1469-44bb-bf3c-fecdfabc2a7f" containerName="util" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.215703 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00eb154-1469-44bb-bf3c-fecdfabc2a7f" containerName="util" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.215810 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f00eb154-1469-44bb-bf3c-fecdfabc2a7f" containerName="pull" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.215898 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00eb154-1469-44bb-bf3c-fecdfabc2a7f" containerName="pull" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.216012 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f00eb154-1469-44bb-bf3c-fecdfabc2a7f" containerName="extract" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.216097 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00eb154-1469-44bb-bf3c-fecdfabc2a7f" containerName="extract" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.216346 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="f00eb154-1469-44bb-bf3c-fecdfabc2a7f" containerName="extract" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.287041 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr"] Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.287451 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.289159 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snmxx\" (UniqueName: \"kubernetes.io/projected/232e3462-a5e6-4098-b4bd-018cba0b4444-kube-api-access-snmxx\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr\" (UID: \"232e3462-a5e6-4098-b4bd-018cba0b4444\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.289226 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/232e3462-a5e6-4098-b4bd-018cba0b4444-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr\" (UID: \"232e3462-a5e6-4098-b4bd-018cba0b4444\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.289288 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/232e3462-a5e6-4098-b4bd-018cba0b4444-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr\" (UID: \"232e3462-a5e6-4098-b4bd-018cba0b4444\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.390153 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-snmxx\" (UniqueName: \"kubernetes.io/projected/232e3462-a5e6-4098-b4bd-018cba0b4444-kube-api-access-snmxx\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr\" (UID: \"232e3462-a5e6-4098-b4bd-018cba0b4444\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.390215 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/232e3462-a5e6-4098-b4bd-018cba0b4444-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr\" (UID: \"232e3462-a5e6-4098-b4bd-018cba0b4444\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.390271 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/232e3462-a5e6-4098-b4bd-018cba0b4444-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr\" (UID: \"232e3462-a5e6-4098-b4bd-018cba0b4444\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.390832 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/232e3462-a5e6-4098-b4bd-018cba0b4444-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr\" (UID: \"232e3462-a5e6-4098-b4bd-018cba0b4444\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.390855 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/232e3462-a5e6-4098-b4bd-018cba0b4444-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr\" (UID: \"232e3462-a5e6-4098-b4bd-018cba0b4444\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.446276 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-snmxx\" (UniqueName: \"kubernetes.io/projected/232e3462-a5e6-4098-b4bd-018cba0b4444-kube-api-access-snmxx\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr\" (UID: \"232e3462-a5e6-4098-b4bd-018cba0b4444\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.603628 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.890342 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.898165 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d182cae-1cf2-46b4-accb-db755e8d7f16-bundle\") pod \"9d182cae-1cf2-46b4-accb-db755e8d7f16\" (UID: \"9d182cae-1cf2-46b4-accb-db755e8d7f16\") " Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.898475 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d182cae-1cf2-46b4-accb-db755e8d7f16-util\") pod \"9d182cae-1cf2-46b4-accb-db755e8d7f16\" (UID: \"9d182cae-1cf2-46b4-accb-db755e8d7f16\") " Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.898506 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mvln\" (UniqueName: \"kubernetes.io/projected/9d182cae-1cf2-46b4-accb-db755e8d7f16-kube-api-access-5mvln\") pod \"9d182cae-1cf2-46b4-accb-db755e8d7f16\" (UID: \"9d182cae-1cf2-46b4-accb-db755e8d7f16\") " Dec 09 14:24:54 crc kubenswrapper[5173]: I1209 14:24:54.913557 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d182cae-1cf2-46b4-accb-db755e8d7f16-kube-api-access-5mvln" (OuterVolumeSpecName: "kube-api-access-5mvln") pod "9d182cae-1cf2-46b4-accb-db755e8d7f16" (UID: "9d182cae-1cf2-46b4-accb-db755e8d7f16"). InnerVolumeSpecName "kube-api-access-5mvln". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:24:55 crc kubenswrapper[5173]: I1209 14:24:55.020216 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5mvln\" (UniqueName: \"kubernetes.io/projected/9d182cae-1cf2-46b4-accb-db755e8d7f16-kube-api-access-5mvln\") on node \"crc\" DevicePath \"\"" Dec 09 14:24:55 crc kubenswrapper[5173]: I1209 14:24:55.116818 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d182cae-1cf2-46b4-accb-db755e8d7f16-util" (OuterVolumeSpecName: "util") pod "9d182cae-1cf2-46b4-accb-db755e8d7f16" (UID: "9d182cae-1cf2-46b4-accb-db755e8d7f16"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:24:55 crc kubenswrapper[5173]: I1209 14:24:55.178539 5173 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d182cae-1cf2-46b4-accb-db755e8d7f16-util\") on node \"crc\" DevicePath \"\"" Dec 09 14:24:55 crc kubenswrapper[5173]: I1209 14:24:55.255744 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" event={"ID":"9d182cae-1cf2-46b4-accb-db755e8d7f16","Type":"ContainerDied","Data":"d1eec721e1880a018f16311e64b362d1eadb7cffdcbcfc4cfc283e9887d0476a"} Dec 09 14:24:55 crc kubenswrapper[5173]: I1209 14:24:55.255807 5173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1eec721e1880a018f16311e64b362d1eadb7cffdcbcfc4cfc283e9887d0476a" Dec 09 14:24:55 crc kubenswrapper[5173]: I1209 14:24:55.255904 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7dfnr" Dec 09 14:24:55 crc kubenswrapper[5173]: I1209 14:24:55.381813 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d182cae-1cf2-46b4-accb-db755e8d7f16-bundle" (OuterVolumeSpecName: "bundle") pod "9d182cae-1cf2-46b4-accb-db755e8d7f16" (UID: "9d182cae-1cf2-46b4-accb-db755e8d7f16"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:24:55 crc kubenswrapper[5173]: I1209 14:24:55.383628 5173 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d182cae-1cf2-46b4-accb-db755e8d7f16-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:24:55 crc kubenswrapper[5173]: I1209 14:24:55.801335 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr"] Dec 09 14:24:56 crc kubenswrapper[5173]: I1209 14:24:56.260844 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr" event={"ID":"232e3462-a5e6-4098-b4bd-018cba0b4444","Type":"ContainerStarted","Data":"8ac824694f442ebd10794fe54048ace7122cd427bdd8fcbf3c3cceff14f332c6"} Dec 09 14:24:56 crc kubenswrapper[5173]: I1209 14:24:56.888623 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-sxl6t"] Dec 09 14:24:56 crc kubenswrapper[5173]: I1209 14:24:56.889336 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d182cae-1cf2-46b4-accb-db755e8d7f16" containerName="extract" Dec 09 14:24:56 crc kubenswrapper[5173]: I1209 14:24:56.889375 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d182cae-1cf2-46b4-accb-db755e8d7f16" containerName="extract" Dec 09 14:24:56 crc kubenswrapper[5173]: I1209 14:24:56.889397 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d182cae-1cf2-46b4-accb-db755e8d7f16" containerName="util" Dec 09 14:24:56 crc kubenswrapper[5173]: I1209 14:24:56.889404 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d182cae-1cf2-46b4-accb-db755e8d7f16" containerName="util" Dec 09 14:24:56 crc kubenswrapper[5173]: I1209 14:24:56.889432 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d182cae-1cf2-46b4-accb-db755e8d7f16" containerName="pull" Dec 09 14:24:56 
crc kubenswrapper[5173]: I1209 14:24:56.889440 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d182cae-1cf2-46b4-accb-db755e8d7f16" containerName="pull" Dec 09 14:24:56 crc kubenswrapper[5173]: I1209 14:24:56.889546 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="9d182cae-1cf2-46b4-accb-db755e8d7f16" containerName="extract" Dec 09 14:24:57 crc kubenswrapper[5173]: I1209 14:24:57.743638 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-sxl6t"] Dec 09 14:24:57 crc kubenswrapper[5173]: I1209 14:24:57.743909 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7"] Dec 09 14:24:57 crc kubenswrapper[5173]: I1209 14:24:57.743445 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-sxl6t" Dec 09 14:24:57 crc kubenswrapper[5173]: I1209 14:24:57.751834 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Dec 09 14:24:57 crc kubenswrapper[5173]: I1209 14:24:57.751921 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-mmtn4\"" Dec 09 14:24:57 crc kubenswrapper[5173]: I1209 14:24:57.757766 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Dec 09 14:24:57 crc kubenswrapper[5173]: I1209 14:24:57.815163 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqgdn\" (UniqueName: \"kubernetes.io/projected/a05293e8-ec70-4688-b88e-53cf4499f45f-kube-api-access-sqgdn\") pod \"obo-prometheus-operator-86648f486b-sxl6t\" (UID: \"a05293e8-ec70-4688-b88e-53cf4499f45f\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-sxl6t" Dec 09 14:24:57 crc kubenswrapper[5173]: I1209 14:24:57.916253 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sqgdn\" (UniqueName: \"kubernetes.io/projected/a05293e8-ec70-4688-b88e-53cf4499f45f-kube-api-access-sqgdn\") pod \"obo-prometheus-operator-86648f486b-sxl6t\" (UID: \"a05293e8-ec70-4688-b88e-53cf4499f45f\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-sxl6t" Dec 09 14:24:57 crc kubenswrapper[5173]: I1209 14:24:57.953132 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqgdn\" (UniqueName: \"kubernetes.io/projected/a05293e8-ec70-4688-b88e-53cf4499f45f-kube-api-access-sqgdn\") pod \"obo-prometheus-operator-86648f486b-sxl6t\" (UID: \"a05293e8-ec70-4688-b88e-53cf4499f45f\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-sxl6t" Dec 09 14:24:58 crc kubenswrapper[5173]: I1209 14:24:58.070587 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-sxl6t" Dec 09 14:24:58 crc kubenswrapper[5173]: I1209 14:24:58.279858 5173 generic.go:358] "Generic (PLEG): container finished" podID="fa0a7a74-d3a8-4278-9831-91fdd079f449" containerID="4215c7a07c3e1b909cbb411ae2640aeac348319f6d1becb2f9a08d4226b2f2a7" exitCode=0 Dec 09 14:24:58 crc kubenswrapper[5173]: I1209 14:24:58.300493 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-56z8w" Dec 09 14:24:58 crc kubenswrapper[5173]: I1209 14:24:58.301334 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7" Dec 09 14:24:58 crc kubenswrapper[5173]: I1209 14:24:58.306925 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Dec 09 14:24:58 crc kubenswrapper[5173]: I1209 14:24:58.306998 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-mbvlh\"" Dec 09 14:24:58 crc kubenswrapper[5173]: I1209 14:24:58.312805 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n"] Dec 09 14:24:58 crc kubenswrapper[5173]: I1209 14:24:58.321704 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9d45effc-e479-4f74-8a14-21d616fee747-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7\" (UID: \"9d45effc-e479-4f74-8a14-21d616fee747\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7" Dec 09 14:24:58 crc kubenswrapper[5173]: I1209 14:24:58.321886 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d45effc-e479-4f74-8a14-21d616fee747-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7\" (UID: \"9d45effc-e479-4f74-8a14-21d616fee747\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7" Dec 09 14:24:58 crc kubenswrapper[5173]: I1209 14:24:58.423326 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d45effc-e479-4f74-8a14-21d616fee747-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7\" (UID: \"9d45effc-e479-4f74-8a14-21d616fee747\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7" Dec 09 14:24:58 crc kubenswrapper[5173]: I1209 14:24:58.423767 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9d45effc-e479-4f74-8a14-21d616fee747-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7\" (UID: \"9d45effc-e479-4f74-8a14-21d616fee747\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7" Dec 09 14:24:58 crc kubenswrapper[5173]: I1209 14:24:58.430213 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9d45effc-e479-4f74-8a14-21d616fee747-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7\" (UID: \"9d45effc-e479-4f74-8a14-21d616fee747\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7" Dec 09 14:24:58 crc kubenswrapper[5173]: I1209 14:24:58.430798 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d45effc-e479-4f74-8a14-21d616fee747-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7\" (UID: \"9d45effc-e479-4f74-8a14-21d616fee747\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7" Dec 09 14:24:58 crc kubenswrapper[5173]: I1209 14:24:58.638237 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.130838 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkd82" event={"ID":"fa0a7a74-d3a8-4278-9831-91fdd079f449","Type":"ContainerDied","Data":"4215c7a07c3e1b909cbb411ae2640aeac348319f6d1becb2f9a08d4226b2f2a7"} Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.131232 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7"] Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.131255 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n"] Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.131267 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-hqwqw"] Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.131002 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.243919 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9278752b-e88e-41eb-86e5-524982fe7006-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n\" (UID: \"9278752b-e88e-41eb-86e5-524982fe7006\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.243989 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9278752b-e88e-41eb-86e5-524982fe7006-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n\" (UID: \"9278752b-e88e-41eb-86e5-524982fe7006\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.260262 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-hqwqw"] Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.260321 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-2mdk2"] Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.260460 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-hqwqw" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.262799 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.263021 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-glt27\"" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.313979 5173 generic.go:358] "Generic (PLEG): container finished" podID="232e3462-a5e6-4098-b4bd-018cba0b4444" containerID="8189980ff49398445577c63b4dd1d447cd1ad4c9fe6e60c5236be761761c5d5d" exitCode=0 Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.346079 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9278752b-e88e-41eb-86e5-524982fe7006-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n\" (UID: \"9278752b-e88e-41eb-86e5-524982fe7006\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.346162 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f24447b8-3381-4260-a649-74fd7d2e932c-observability-operator-tls\") pod \"observability-operator-78c97476f4-hqwqw\" (UID: \"f24447b8-3381-4260-a649-74fd7d2e932c\") " pod="openshift-operators/observability-operator-78c97476f4-hqwqw" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.346185 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9278752b-e88e-41eb-86e5-524982fe7006-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n\" (UID: \"9278752b-e88e-41eb-86e5-524982fe7006\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.346234 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v7x8\" (UniqueName: \"kubernetes.io/projected/f24447b8-3381-4260-a649-74fd7d2e932c-kube-api-access-8v7x8\") pod \"observability-operator-78c97476f4-hqwqw\" (UID: \"f24447b8-3381-4260-a649-74fd7d2e932c\") " pod="openshift-operators/observability-operator-78c97476f4-hqwqw" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.347487 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-2mdk2"] Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.347545 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7" event={"ID":"9d45effc-e479-4f74-8a14-21d616fee747","Type":"ContainerStarted","Data":"0a430da2ba3d09e6e8329c6d48f867721ad757f3d88a9216a03066290c6d79b7"} Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.347594 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-sxl6t" event={"ID":"a05293e8-ec70-4688-b88e-53cf4499f45f","Type":"ContainerStarted","Data":"ccbf3be3e5471ccb1245f2593d8dd48a4eb843b96cd189e2c3a584f3c62c72f0"} Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.347606 5173 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr" event={"ID":"232e3462-a5e6-4098-b4bd-018cba0b4444","Type":"ContainerDied","Data":"8189980ff49398445577c63b4dd1d447cd1ad4c9fe6e60c5236be761761c5d5d"} Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.347627 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-sxl6t"] Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.347644 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-tpkl8"] Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.347665 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7"] Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.348533 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-2mdk2" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.354987 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-pxwlp\"" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.355544 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dkd82" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.358281 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9278752b-e88e-41eb-86e5-524982fe7006-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n\" (UID: \"9278752b-e88e-41eb-86e5-524982fe7006\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.392479 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9278752b-e88e-41eb-86e5-524982fe7006-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n\" (UID: \"9278752b-e88e-41eb-86e5-524982fe7006\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.447525 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0a7a74-d3a8-4278-9831-91fdd079f449-utilities\") pod \"fa0a7a74-d3a8-4278-9831-91fdd079f449\" (UID: \"fa0a7a74-d3a8-4278-9831-91fdd079f449\") " Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.447641 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0a7a74-d3a8-4278-9831-91fdd079f449-catalog-content\") pod \"fa0a7a74-d3a8-4278-9831-91fdd079f449\" (UID: \"fa0a7a74-d3a8-4278-9831-91fdd079f449\") " Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.447676 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nftqf\" (UniqueName: \"kubernetes.io/projected/fa0a7a74-d3a8-4278-9831-91fdd079f449-kube-api-access-nftqf\") pod \"fa0a7a74-d3a8-4278-9831-91fdd079f449\" (UID: \"fa0a7a74-d3a8-4278-9831-91fdd079f449\") " Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.447903 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f24447b8-3381-4260-a649-74fd7d2e932c-observability-operator-tls\") pod \"observability-operator-78c97476f4-hqwqw\" (UID: \"f24447b8-3381-4260-a649-74fd7d2e932c\") " pod="openshift-operators/observability-operator-78c97476f4-hqwqw" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.447927 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d07da49f-f0f2-47a1-a5a9-553a7c6266f0-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-2mdk2\" (UID: \"d07da49f-f0f2-47a1-a5a9-553a7c6266f0\") " pod="openshift-operators/perses-operator-68bdb49cbf-2mdk2" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.447956 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6r76\" (UniqueName: \"kubernetes.io/projected/d07da49f-f0f2-47a1-a5a9-553a7c6266f0-kube-api-access-v6r76\") pod \"perses-operator-68bdb49cbf-2mdk2\" (UID: \"d07da49f-f0f2-47a1-a5a9-553a7c6266f0\") " pod="openshift-operators/perses-operator-68bdb49cbf-2mdk2" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.447991 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8v7x8\" (UniqueName: \"kubernetes.io/projected/f24447b8-3381-4260-a649-74fd7d2e932c-kube-api-access-8v7x8\") pod \"observability-operator-78c97476f4-hqwqw\" (UID: \"f24447b8-3381-4260-a649-74fd7d2e932c\") " pod="openshift-operators/observability-operator-78c97476f4-hqwqw" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.451496 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.453190 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa0a7a74-d3a8-4278-9831-91fdd079f449-utilities" (OuterVolumeSpecName: "utilities") pod "fa0a7a74-d3a8-4278-9831-91fdd079f449" (UID: "fa0a7a74-d3a8-4278-9831-91fdd079f449"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.470921 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa0a7a74-d3a8-4278-9831-91fdd079f449-kube-api-access-nftqf" (OuterVolumeSpecName: "kube-api-access-nftqf") pod "fa0a7a74-d3a8-4278-9831-91fdd079f449" (UID: "fa0a7a74-d3a8-4278-9831-91fdd079f449"). InnerVolumeSpecName "kube-api-access-nftqf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.471241 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f24447b8-3381-4260-a649-74fd7d2e932c-observability-operator-tls\") pod \"observability-operator-78c97476f4-hqwqw\" (UID: \"f24447b8-3381-4260-a649-74fd7d2e932c\") " pod="openshift-operators/observability-operator-78c97476f4-hqwqw" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.499268 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v7x8\" (UniqueName: \"kubernetes.io/projected/f24447b8-3381-4260-a649-74fd7d2e932c-kube-api-access-8v7x8\") pod \"observability-operator-78c97476f4-hqwqw\" (UID: \"f24447b8-3381-4260-a649-74fd7d2e932c\") " pod="openshift-operators/observability-operator-78c97476f4-hqwqw" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.551075 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d07da49f-f0f2-47a1-a5a9-553a7c6266f0-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-2mdk2\" (UID: \"d07da49f-f0f2-47a1-a5a9-553a7c6266f0\") " pod="openshift-operators/perses-operator-68bdb49cbf-2mdk2" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.551132 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v6r76\" (UniqueName: \"kubernetes.io/projected/d07da49f-f0f2-47a1-a5a9-553a7c6266f0-kube-api-access-v6r76\") pod \"perses-operator-68bdb49cbf-2mdk2\" (UID: \"d07da49f-f0f2-47a1-a5a9-553a7c6266f0\") " pod="openshift-operators/perses-operator-68bdb49cbf-2mdk2" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.551193 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0a7a74-d3a8-4278-9831-91fdd079f449-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.551204 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nftqf\" (UniqueName: \"kubernetes.io/projected/fa0a7a74-d3a8-4278-9831-91fdd079f449-kube-api-access-nftqf\") on node \"crc\" DevicePath \"\"" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.552261 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d07da49f-f0f2-47a1-a5a9-553a7c6266f0-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-2mdk2\" (UID: \"d07da49f-f0f2-47a1-a5a9-553a7c6266f0\") " pod="openshift-operators/perses-operator-68bdb49cbf-2mdk2" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.563939 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa0a7a74-d3a8-4278-9831-91fdd079f449-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa0a7a74-d3a8-4278-9831-91fdd079f449" (UID: "fa0a7a74-d3a8-4278-9831-91fdd079f449"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.577227 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6r76\" (UniqueName: \"kubernetes.io/projected/d07da49f-f0f2-47a1-a5a9-553a7c6266f0-kube-api-access-v6r76\") pod \"perses-operator-68bdb49cbf-2mdk2\" (UID: \"d07da49f-f0f2-47a1-a5a9-553a7c6266f0\") " pod="openshift-operators/perses-operator-68bdb49cbf-2mdk2" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.644170 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-hqwqw" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.652132 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0a7a74-d3a8-4278-9831-91fdd079f449-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:24:59 crc kubenswrapper[5173]: I1209 14:24:59.721256 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-2mdk2" Dec 09 14:25:00 crc kubenswrapper[5173]: I1209 14:25:00.147017 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n"] Dec 09 14:25:00 crc kubenswrapper[5173]: W1209 14:25:00.283891 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf24447b8_3381_4260_a649_74fd7d2e932c.slice/crio-7363c37ddfdb1e8067a4e4ccc358679da80b546e087a8774a11fcd9d135d35f8 WatchSource:0}: Error finding container 7363c37ddfdb1e8067a4e4ccc358679da80b546e087a8774a11fcd9d135d35f8: Status 404 returned error can't find the container with id 7363c37ddfdb1e8067a4e4ccc358679da80b546e087a8774a11fcd9d135d35f8 Dec 09 14:25:00 crc kubenswrapper[5173]: I1209 14:25:00.288220 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-2mdk2"] Dec 09 14:25:00 crc kubenswrapper[5173]: I1209 14:25:00.290000 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-hqwqw"] Dec 09 14:25:00 crc kubenswrapper[5173]: W1209 14:25:00.322075 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd07da49f_f0f2_47a1_a5a9_553a7c6266f0.slice/crio-e7a99b216ef33c1e1d9a554f1cf6c20d4b9b62e1ca49be91fb82b885f9530e5a WatchSource:0}: Error finding container e7a99b216ef33c1e1d9a554f1cf6c20d4b9b62e1ca49be91fb82b885f9530e5a: Status 404 returned error can't find the container with id e7a99b216ef33c1e1d9a554f1cf6c20d4b9b62e1ca49be91fb82b885f9530e5a Dec 09 14:25:00 crc kubenswrapper[5173]: I1209 14:25:00.341343 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-hqwqw" event={"ID":"f24447b8-3381-4260-a649-74fd7d2e932c","Type":"ContainerStarted","Data":"7363c37ddfdb1e8067a4e4ccc358679da80b546e087a8774a11fcd9d135d35f8"} Dec 09 14:25:00 crc kubenswrapper[5173]: I1209 14:25:00.349611 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dkd82" event={"ID":"fa0a7a74-d3a8-4278-9831-91fdd079f449","Type":"ContainerDied","Data":"d1cc5daa2e90ad97fb672b6052df94c0866ceb2d62c148a8846ae4bd9dc7cd2a"} Dec 09 14:25:00 crc kubenswrapper[5173]: I1209 14:25:00.349658 5173 scope.go:117] "RemoveContainer" 
containerID="4215c7a07c3e1b909cbb411ae2640aeac348319f6d1becb2f9a08d4226b2f2a7" Dec 09 14:25:00 crc kubenswrapper[5173]: I1209 14:25:00.349792 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dkd82" Dec 09 14:25:00 crc kubenswrapper[5173]: I1209 14:25:00.354929 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n" event={"ID":"9278752b-e88e-41eb-86e5-524982fe7006","Type":"ContainerStarted","Data":"4d769a7d7152bd8abcc8ad1b3974623422e026fb483e168386b97a42e1fa5c14"} Dec 09 14:25:00 crc kubenswrapper[5173]: I1209 14:25:00.358103 5173 generic.go:358] "Generic (PLEG): container finished" podID="8883851f-49c8-4275-a8b5-90f065c14dbd" containerID="86fefa54e79a3fea5c23ce75d2af005b513a7f04f1daa91d4cec5e8d910ad0a9" exitCode=0 Dec 09 14:25:00 crc kubenswrapper[5173]: I1209 14:25:00.358188 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p89mp" event={"ID":"8883851f-49c8-4275-a8b5-90f065c14dbd","Type":"ContainerDied","Data":"86fefa54e79a3fea5c23ce75d2af005b513a7f04f1daa91d4cec5e8d910ad0a9"} Dec 09 14:25:00 crc kubenswrapper[5173]: I1209 14:25:00.373633 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dkd82"] Dec 09 14:25:00 crc kubenswrapper[5173]: I1209 14:25:00.380291 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dkd82"] Dec 09 14:25:00 crc kubenswrapper[5173]: I1209 14:25:00.395669 5173 scope.go:117] "RemoveContainer" containerID="2a553f9703db7b043b7ff51c4eac01f0f7871258543a7218ae860e348eaad21c" Dec 09 14:25:00 crc kubenswrapper[5173]: I1209 14:25:00.416485 5173 scope.go:117] "RemoveContainer" containerID="7191d8b7f8840bc0c09f58c44ef6e62d6acf5021a2e59a2de8cfe5d2bb2f7fb2" Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.115654 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-9z2bl"] Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.116830 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fa0a7a74-d3a8-4278-9831-91fdd079f449" containerName="extract-content" Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.116855 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0a7a74-d3a8-4278-9831-91fdd079f449" containerName="extract-content" Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.116874 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fa0a7a74-d3a8-4278-9831-91fdd079f449" containerName="extract-utilities" Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.116883 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0a7a74-d3a8-4278-9831-91fdd079f449" containerName="extract-utilities" Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.116919 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fa0a7a74-d3a8-4278-9831-91fdd079f449" containerName="registry-server" Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.116927 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0a7a74-d3a8-4278-9831-91fdd079f449" containerName="registry-server" Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.117043 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="fa0a7a74-d3a8-4278-9831-91fdd079f449" containerName="registry-server" Dec 09 14:25:01 crc 
kubenswrapper[5173]: I1209 14:25:01.128992 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-9z2bl"] Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.129159 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-9z2bl" Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.130865 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-sfqmb\"" Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.131695 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.133155 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.187556 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68ztq\" (UniqueName: \"kubernetes.io/projected/a61f29ed-1404-4278-bad9-494720ee12cf-kube-api-access-68ztq\") pod \"interconnect-operator-78b9bd8798-9z2bl\" (UID: \"a61f29ed-1404-4278-bad9-494720ee12cf\") " pod="service-telemetry/interconnect-operator-78b9bd8798-9z2bl" Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.290433 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-68ztq\" (UniqueName: \"kubernetes.io/projected/a61f29ed-1404-4278-bad9-494720ee12cf-kube-api-access-68ztq\") pod \"interconnect-operator-78b9bd8798-9z2bl\" (UID: \"a61f29ed-1404-4278-bad9-494720ee12cf\") " pod="service-telemetry/interconnect-operator-78b9bd8798-9z2bl" Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.365178 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-68ztq\" (UniqueName: \"kubernetes.io/projected/a61f29ed-1404-4278-bad9-494720ee12cf-kube-api-access-68ztq\") pod \"interconnect-operator-78b9bd8798-9z2bl\" (UID: \"a61f29ed-1404-4278-bad9-494720ee12cf\") " pod="service-telemetry/interconnect-operator-78b9bd8798-9z2bl" Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.430431 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-2mdk2" event={"ID":"d07da49f-f0f2-47a1-a5a9-553a7c6266f0","Type":"ContainerStarted","Data":"e7a99b216ef33c1e1d9a554f1cf6c20d4b9b62e1ca49be91fb82b885f9530e5a"} Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.443713 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-9z2bl" Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.445681 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p89mp" event={"ID":"8883851f-49c8-4275-a8b5-90f065c14dbd","Type":"ContainerStarted","Data":"c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242"} Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.499730 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-p89mp" Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.499923 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p89mp" Dec 09 14:25:01 crc kubenswrapper[5173]: I1209 14:25:01.883091 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa0a7a74-d3a8-4278-9831-91fdd079f449" path="/var/lib/kubelet/pods/fa0a7a74-d3a8-4278-9831-91fdd079f449/volumes" Dec 09 14:25:02 crc kubenswrapper[5173]: I1209 14:25:02.279064 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p89mp" podStartSLOduration=5.817504367 podStartE2EDuration="11.279048551s" podCreationTimestamp="2025-12-09 14:24:51 +0000 UTC" firstStartedPulling="2025-12-09 14:24:54.137824411 +0000 UTC m=+777.063106658" lastFinishedPulling="2025-12-09 14:24:59.599368595 +0000 UTC m=+782.524650842" observedRunningTime="2025-12-09 14:25:01.52939946 +0000 UTC m=+784.454681727" watchObservedRunningTime="2025-12-09 14:25:02.279048551 +0000 UTC m=+785.204330798" Dec 09 14:25:02 crc kubenswrapper[5173]: I1209 14:25:02.282725 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-9z2bl"] Dec 09 14:25:02 crc kubenswrapper[5173]: I1209 14:25:02.465010 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-9z2bl" event={"ID":"a61f29ed-1404-4278-bad9-494720ee12cf","Type":"ContainerStarted","Data":"282407361b50d38c49a12a3a9d97eccaa10246d31662aca853a3a6535814d5f5"} Dec 09 14:25:02 crc kubenswrapper[5173]: I1209 14:25:02.592537 5173 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-p89mp" podUID="8883851f-49c8-4275-a8b5-90f065c14dbd" containerName="registry-server" probeResult="failure" output=< Dec 09 14:25:02 crc kubenswrapper[5173]: timeout: failed to connect service ":50051" within 1s Dec 09 14:25:02 crc kubenswrapper[5173]: > Dec 09 14:25:03 crc kubenswrapper[5173]: I1209 14:25:03.586626 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-7bbcfd86d7-xv7xt"] Dec 09 14:25:03 crc kubenswrapper[5173]: I1209 14:25:03.597672 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-7bbcfd86d7-xv7xt" Dec 09 14:25:03 crc kubenswrapper[5173]: I1209 14:25:03.603502 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Dec 09 14:25:03 crc kubenswrapper[5173]: I1209 14:25:03.603731 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-tcjpj\"" Dec 09 14:25:03 crc kubenswrapper[5173]: I1209 14:25:03.605848 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7bbcfd86d7-xv7xt"] Dec 09 14:25:03 crc kubenswrapper[5173]: I1209 14:25:03.762815 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r4jq\" (UniqueName: \"kubernetes.io/projected/2c8e44a6-f9c2-40c4-a97a-e958d9190097-kube-api-access-8r4jq\") pod \"elastic-operator-7bbcfd86d7-xv7xt\" (UID: \"2c8e44a6-f9c2-40c4-a97a-e958d9190097\") " pod="service-telemetry/elastic-operator-7bbcfd86d7-xv7xt" Dec 09 14:25:03 crc kubenswrapper[5173]: I1209 14:25:03.762868 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2c8e44a6-f9c2-40c4-a97a-e958d9190097-apiservice-cert\") pod \"elastic-operator-7bbcfd86d7-xv7xt\" (UID: \"2c8e44a6-f9c2-40c4-a97a-e958d9190097\") " pod="service-telemetry/elastic-operator-7bbcfd86d7-xv7xt" Dec 09 14:25:03 crc kubenswrapper[5173]: I1209 14:25:03.762907 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c8e44a6-f9c2-40c4-a97a-e958d9190097-webhook-cert\") pod \"elastic-operator-7bbcfd86d7-xv7xt\" (UID: \"2c8e44a6-f9c2-40c4-a97a-e958d9190097\") " pod="service-telemetry/elastic-operator-7bbcfd86d7-xv7xt" Dec 09 14:25:03 crc kubenswrapper[5173]: I1209 14:25:03.863705 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8r4jq\" (UniqueName: \"kubernetes.io/projected/2c8e44a6-f9c2-40c4-a97a-e958d9190097-kube-api-access-8r4jq\") pod \"elastic-operator-7bbcfd86d7-xv7xt\" (UID: \"2c8e44a6-f9c2-40c4-a97a-e958d9190097\") " pod="service-telemetry/elastic-operator-7bbcfd86d7-xv7xt" Dec 09 14:25:03 crc kubenswrapper[5173]: I1209 14:25:03.863749 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2c8e44a6-f9c2-40c4-a97a-e958d9190097-apiservice-cert\") pod \"elastic-operator-7bbcfd86d7-xv7xt\" (UID: \"2c8e44a6-f9c2-40c4-a97a-e958d9190097\") " pod="service-telemetry/elastic-operator-7bbcfd86d7-xv7xt" Dec 09 14:25:03 crc kubenswrapper[5173]: I1209 14:25:03.863773 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c8e44a6-f9c2-40c4-a97a-e958d9190097-webhook-cert\") pod \"elastic-operator-7bbcfd86d7-xv7xt\" (UID: \"2c8e44a6-f9c2-40c4-a97a-e958d9190097\") " pod="service-telemetry/elastic-operator-7bbcfd86d7-xv7xt" Dec 09 14:25:03 crc kubenswrapper[5173]: I1209 14:25:03.881397 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c8e44a6-f9c2-40c4-a97a-e958d9190097-webhook-cert\") pod \"elastic-operator-7bbcfd86d7-xv7xt\" (UID: \"2c8e44a6-f9c2-40c4-a97a-e958d9190097\") " 
pod="service-telemetry/elastic-operator-7bbcfd86d7-xv7xt" Dec 09 14:25:03 crc kubenswrapper[5173]: I1209 14:25:03.895222 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r4jq\" (UniqueName: \"kubernetes.io/projected/2c8e44a6-f9c2-40c4-a97a-e958d9190097-kube-api-access-8r4jq\") pod \"elastic-operator-7bbcfd86d7-xv7xt\" (UID: \"2c8e44a6-f9c2-40c4-a97a-e958d9190097\") " pod="service-telemetry/elastic-operator-7bbcfd86d7-xv7xt" Dec 09 14:25:03 crc kubenswrapper[5173]: I1209 14:25:03.905969 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2c8e44a6-f9c2-40c4-a97a-e958d9190097-apiservice-cert\") pod \"elastic-operator-7bbcfd86d7-xv7xt\" (UID: \"2c8e44a6-f9c2-40c4-a97a-e958d9190097\") " pod="service-telemetry/elastic-operator-7bbcfd86d7-xv7xt" Dec 09 14:25:03 crc kubenswrapper[5173]: I1209 14:25:03.932583 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-7bbcfd86d7-xv7xt" Dec 09 14:25:04 crc kubenswrapper[5173]: I1209 14:25:04.399095 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7bbcfd86d7-xv7xt"] Dec 09 14:25:04 crc kubenswrapper[5173]: W1209 14:25:04.427968 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c8e44a6_f9c2_40c4_a97a_e958d9190097.slice/crio-47a25a3781130dd2d1750ed6380a6a01bf9015d0e0a76aea2f9f063741c0846e WatchSource:0}: Error finding container 47a25a3781130dd2d1750ed6380a6a01bf9015d0e0a76aea2f9f063741c0846e: Status 404 returned error can't find the container with id 47a25a3781130dd2d1750ed6380a6a01bf9015d0e0a76aea2f9f063741c0846e Dec 09 14:25:04 crc kubenswrapper[5173]: I1209 14:25:04.482910 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7bbcfd86d7-xv7xt" event={"ID":"2c8e44a6-f9c2-40c4-a97a-e958d9190097","Type":"ContainerStarted","Data":"47a25a3781130dd2d1750ed6380a6a01bf9015d0e0a76aea2f9f063741c0846e"} Dec 09 14:25:11 crc kubenswrapper[5173]: I1209 14:25:11.555179 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p89mp" Dec 09 14:25:11 crc kubenswrapper[5173]: I1209 14:25:11.614238 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p89mp" Dec 09 14:25:12 crc kubenswrapper[5173]: I1209 14:25:12.709226 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p89mp"] Dec 09 14:25:12 crc kubenswrapper[5173]: I1209 14:25:12.709562 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p89mp" podUID="8883851f-49c8-4275-a8b5-90f065c14dbd" containerName="registry-server" containerID="cri-o://c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242" gracePeriod=2 Dec 09 14:25:13 crc kubenswrapper[5173]: I1209 14:25:13.599822 5173 generic.go:358] "Generic (PLEG): container finished" podID="8883851f-49c8-4275-a8b5-90f065c14dbd" containerID="c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242" exitCode=0 Dec 09 14:25:13 crc kubenswrapper[5173]: I1209 14:25:13.599979 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p89mp" 
event={"ID":"8883851f-49c8-4275-a8b5-90f065c14dbd","Type":"ContainerDied","Data":"c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242"} Dec 09 14:25:19 crc kubenswrapper[5173]: I1209 14:25:19.085247 5173 patch_prober.go:28] interesting pod/machine-config-daemon-pxfmg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:25:19 crc kubenswrapper[5173]: I1209 14:25:19.085634 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:25:21 crc kubenswrapper[5173]: E1209 14:25:21.861563 5173 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242 is running failed: container process not found" containerID="c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:25:21 crc kubenswrapper[5173]: E1209 14:25:21.862079 5173 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242 is running failed: container process not found" containerID="c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:25:21 crc kubenswrapper[5173]: E1209 14:25:21.862474 5173 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242 is running failed: container process not found" containerID="c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:25:21 crc kubenswrapper[5173]: E1209 14:25:21.862554 5173 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-p89mp" podUID="8883851f-49c8-4275-a8b5-90f065c14dbd" containerName="registry-server" probeResult="unknown" Dec 09 14:25:24 crc kubenswrapper[5173]: I1209 14:25:24.405930 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" podUID="3f277bd6-ea48-4729-960f-5a2b97bbfecc" containerName="registry" containerID="cri-o://245907205bda48e06d3d5bfe7a589499facddefa96f0a5d9b52d28edb78a0e9f" gracePeriod=30 Dec 09 14:25:24 crc kubenswrapper[5173]: I1209 14:25:24.723796 5173 generic.go:358] "Generic (PLEG): container finished" podID="3f277bd6-ea48-4729-960f-5a2b97bbfecc" containerID="245907205bda48e06d3d5bfe7a589499facddefa96f0a5d9b52d28edb78a0e9f" exitCode=0 Dec 09 14:25:24 crc kubenswrapper[5173]: I1209 14:25:24.723889 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" 
event={"ID":"3f277bd6-ea48-4729-960f-5a2b97bbfecc","Type":"ContainerDied","Data":"245907205bda48e06d3d5bfe7a589499facddefa96f0a5d9b52d28edb78a0e9f"} Dec 09 14:25:26 crc kubenswrapper[5173]: I1209 14:25:26.822828 5173 patch_prober.go:28] interesting pod/image-registry-66587d64c8-tpkl8 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.7:5000/healthz\": dial tcp 10.217.0.7:5000: connect: connection refused" start-of-body= Dec 09 14:25:26 crc kubenswrapper[5173]: I1209 14:25:26.823376 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" podUID="3f277bd6-ea48-4729-960f-5a2b97bbfecc" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.7:5000/healthz\": dial tcp 10.217.0.7:5000: connect: connection refused" Dec 09 14:25:31 crc kubenswrapper[5173]: E1209 14:25:31.557270 5173 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242 is running failed: container process not found" containerID="c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:25:31 crc kubenswrapper[5173]: E1209 14:25:31.557558 5173 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242 is running failed: container process not found" containerID="c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:25:31 crc kubenswrapper[5173]: E1209 14:25:31.557728 5173 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242 is running failed: container process not found" containerID="c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 14:25:31 crc kubenswrapper[5173]: E1209 14:25:31.557761 5173 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-p89mp" podUID="8883851f-49c8-4275-a8b5-90f065c14dbd" containerName="registry-server" probeResult="unknown" Dec 09 14:25:36 crc kubenswrapper[5173]: I1209 14:25:36.823340 5173 patch_prober.go:28] interesting pod/image-registry-66587d64c8-tpkl8 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.7:5000/healthz\": dial tcp 10.217.0.7:5000: connect: connection refused" start-of-body= Dec 09 14:25:36 crc kubenswrapper[5173]: I1209 14:25:36.823788 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" podUID="3f277bd6-ea48-4729-960f-5a2b97bbfecc" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.7:5000/healthz\": dial tcp 10.217.0.7:5000: connect: connection refused" Dec 09 14:25:37 crc kubenswrapper[5173]: I1209 14:25:37.709834 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p89mp" Dec 09 14:25:37 crc kubenswrapper[5173]: I1209 14:25:37.780755 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsshg\" (UniqueName: \"kubernetes.io/projected/8883851f-49c8-4275-a8b5-90f065c14dbd-kube-api-access-lsshg\") pod \"8883851f-49c8-4275-a8b5-90f065c14dbd\" (UID: \"8883851f-49c8-4275-a8b5-90f065c14dbd\") " Dec 09 14:25:37 crc kubenswrapper[5173]: I1209 14:25:37.780953 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8883851f-49c8-4275-a8b5-90f065c14dbd-catalog-content\") pod \"8883851f-49c8-4275-a8b5-90f065c14dbd\" (UID: \"8883851f-49c8-4275-a8b5-90f065c14dbd\") " Dec 09 14:25:37 crc kubenswrapper[5173]: I1209 14:25:37.780975 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8883851f-49c8-4275-a8b5-90f065c14dbd-utilities\") pod \"8883851f-49c8-4275-a8b5-90f065c14dbd\" (UID: \"8883851f-49c8-4275-a8b5-90f065c14dbd\") " Dec 09 14:25:37 crc kubenswrapper[5173]: I1209 14:25:37.782105 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8883851f-49c8-4275-a8b5-90f065c14dbd-utilities" (OuterVolumeSpecName: "utilities") pod "8883851f-49c8-4275-a8b5-90f065c14dbd" (UID: "8883851f-49c8-4275-a8b5-90f065c14dbd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:25:37 crc kubenswrapper[5173]: I1209 14:25:37.787550 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8883851f-49c8-4275-a8b5-90f065c14dbd-kube-api-access-lsshg" (OuterVolumeSpecName: "kube-api-access-lsshg") pod "8883851f-49c8-4275-a8b5-90f065c14dbd" (UID: "8883851f-49c8-4275-a8b5-90f065c14dbd"). InnerVolumeSpecName "kube-api-access-lsshg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:25:37 crc kubenswrapper[5173]: I1209 14:25:37.813110 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8883851f-49c8-4275-a8b5-90f065c14dbd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8883851f-49c8-4275-a8b5-90f065c14dbd" (UID: "8883851f-49c8-4275-a8b5-90f065c14dbd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:25:37 crc kubenswrapper[5173]: I1209 14:25:37.887696 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lsshg\" (UniqueName: \"kubernetes.io/projected/8883851f-49c8-4275-a8b5-90f065c14dbd-kube-api-access-lsshg\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:37 crc kubenswrapper[5173]: I1209 14:25:37.888049 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8883851f-49c8-4275-a8b5-90f065c14dbd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:37 crc kubenswrapper[5173]: I1209 14:25:37.888061 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8883851f-49c8-4275-a8b5-90f065c14dbd-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:37 crc kubenswrapper[5173]: I1209 14:25:37.947335 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p89mp" event={"ID":"8883851f-49c8-4275-a8b5-90f065c14dbd","Type":"ContainerDied","Data":"88191a00d64558519fa2cf7cc7220ea6cf3af656361f176d06e12bd842ede7d7"} Dec 09 14:25:37 crc kubenswrapper[5173]: I1209 14:25:37.947424 5173 scope.go:117] "RemoveContainer" containerID="c22412b581606ec371e5ce8f312dd9907e2fa1b923879aa400f9355295bf2242" Dec 09 14:25:37 crc kubenswrapper[5173]: I1209 14:25:37.947442 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p89mp" Dec 09 14:25:37 crc kubenswrapper[5173]: I1209 14:25:37.974201 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p89mp"] Dec 09 14:25:37 crc kubenswrapper[5173]: I1209 14:25:37.978448 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p89mp"] Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.008331 5173 scope.go:117] "RemoveContainer" containerID="86fefa54e79a3fea5c23ce75d2af005b513a7f04f1daa91d4cec5e8d910ad0a9" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.051197 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.069778 5173 scope.go:117] "RemoveContainer" containerID="c1a4a24535fab5d82eab0277cb8ffb3a5206ac1c4405f3b302f2e432e5e8d529" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.090890 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3f277bd6-ea48-4729-960f-5a2b97bbfecc-ca-trust-extracted\") pod \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.090963 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f277bd6-ea48-4729-960f-5a2b97bbfecc-bound-sa-token\") pod \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.091027 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w495\" (UniqueName: \"kubernetes.io/projected/3f277bd6-ea48-4729-960f-5a2b97bbfecc-kube-api-access-4w495\") pod \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.091173 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.091245 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3f277bd6-ea48-4729-960f-5a2b97bbfecc-registry-tls\") pod \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.091338 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3f277bd6-ea48-4729-960f-5a2b97bbfecc-registry-certificates\") pod \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.091505 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f277bd6-ea48-4729-960f-5a2b97bbfecc-trusted-ca\") pod \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.091537 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3f277bd6-ea48-4729-960f-5a2b97bbfecc-installation-pull-secrets\") pod \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\" (UID: \"3f277bd6-ea48-4729-960f-5a2b97bbfecc\") " Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.092413 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f277bd6-ea48-4729-960f-5a2b97bbfecc-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "3f277bd6-ea48-4729-960f-5a2b97bbfecc" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.092463 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f277bd6-ea48-4729-960f-5a2b97bbfecc-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "3f277bd6-ea48-4729-960f-5a2b97bbfecc" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.098960 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f277bd6-ea48-4729-960f-5a2b97bbfecc-kube-api-access-4w495" (OuterVolumeSpecName: "kube-api-access-4w495") pod "3f277bd6-ea48-4729-960f-5a2b97bbfecc" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc"). InnerVolumeSpecName "kube-api-access-4w495". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.100496 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f277bd6-ea48-4729-960f-5a2b97bbfecc-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "3f277bd6-ea48-4729-960f-5a2b97bbfecc" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.102957 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f277bd6-ea48-4729-960f-5a2b97bbfecc-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "3f277bd6-ea48-4729-960f-5a2b97bbfecc" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.110291 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "3f277bd6-ea48-4729-960f-5a2b97bbfecc" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.110604 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f277bd6-ea48-4729-960f-5a2b97bbfecc-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "3f277bd6-ea48-4729-960f-5a2b97bbfecc" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.128058 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f277bd6-ea48-4729-960f-5a2b97bbfecc-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "3f277bd6-ea48-4729-960f-5a2b97bbfecc" (UID: "3f277bd6-ea48-4729-960f-5a2b97bbfecc"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.193639 5173 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3f277bd6-ea48-4729-960f-5a2b97bbfecc-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.193677 5173 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f277bd6-ea48-4729-960f-5a2b97bbfecc-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.193686 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4w495\" (UniqueName: \"kubernetes.io/projected/3f277bd6-ea48-4729-960f-5a2b97bbfecc-kube-api-access-4w495\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.193696 5173 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3f277bd6-ea48-4729-960f-5a2b97bbfecc-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.193705 5173 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3f277bd6-ea48-4729-960f-5a2b97bbfecc-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.193713 5173 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f277bd6-ea48-4729-960f-5a2b97bbfecc-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.193722 5173 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3f277bd6-ea48-4729-960f-5a2b97bbfecc-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.954320 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.954387 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-tpkl8" event={"ID":"3f277bd6-ea48-4729-960f-5a2b97bbfecc","Type":"ContainerDied","Data":"0b52cd63eea125196b8d8adaf8cddd77536dce36cb814f9a2501b416545be835"} Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.954853 5173 scope.go:117] "RemoveContainer" containerID="245907205bda48e06d3d5bfe7a589499facddefa96f0a5d9b52d28edb78a0e9f" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.957858 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n" event={"ID":"9278752b-e88e-41eb-86e5-524982fe7006","Type":"ContainerStarted","Data":"5bd00e604cd0a4504e28b38e1a363bf919643953534f82de504497cd029dbb02"} Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.962767 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7" event={"ID":"9d45effc-e479-4f74-8a14-21d616fee747","Type":"ContainerStarted","Data":"0c5e9f1f7524ce2240c5e825615a17891c09e80ac78a2943bd8c0e3b5372917e"} Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.967197 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-2mdk2" event={"ID":"d07da49f-f0f2-47a1-a5a9-553a7c6266f0","Type":"ContainerStarted","Data":"299ea181a955baf8ac6531fa0964e4f96ffb11b129da3b584028abe6e95407ca"} Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.967382 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-2mdk2" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.969065 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-9z2bl" event={"ID":"a61f29ed-1404-4278-bad9-494720ee12cf","Type":"ContainerStarted","Data":"3765fa0da44687509262a1a1b04538a1efb77d4371e5b7652bcfa0f986d27d7a"} Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.976051 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-sxl6t" event={"ID":"a05293e8-ec70-4688-b88e-53cf4499f45f","Type":"ContainerStarted","Data":"9fa49d2f75b68f3d879222c0dd2018f9998b165cb542641a6577e543bafbc2cd"} Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.978179 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-hqwqw" event={"ID":"f24447b8-3381-4260-a649-74fd7d2e932c","Type":"ContainerStarted","Data":"b4897aad5c1ac17d0d2d3efab46851c33a410a2fdb5ea606987d54b8ca149b7f"} Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.978621 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-hqwqw" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.982333 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-hqwqw" Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.982712 5173 generic.go:358] "Generic (PLEG): container finished" podID="232e3462-a5e6-4098-b4bd-018cba0b4444" containerID="ef9699dcc69850a7a529bc61241e2c32ac00090dad4480279f19be9780975d60" exitCode=0 Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 
Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.987061 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-vjp5n" podStartSLOduration=4.043562624 podStartE2EDuration="41.987041573s" podCreationTimestamp="2025-12-09 14:24:57 +0000 UTC" firstStartedPulling="2025-12-09 14:25:00.179671591 +0000 UTC m=+783.104953838" lastFinishedPulling="2025-12-09 14:25:38.12315055 +0000 UTC m=+821.048432787" observedRunningTime="2025-12-09 14:25:38.983058388 +0000 UTC m=+821.908340655" watchObservedRunningTime="2025-12-09 14:25:38.987041573 +0000 UTC m=+821.912323820"
Dec 09 14:25:38 crc kubenswrapper[5173]: I1209 14:25:38.990472 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7bbcfd86d7-xv7xt" event={"ID":"2c8e44a6-f9c2-40c4-a97a-e958d9190097","Type":"ContainerStarted","Data":"401f3420c75e3ff772deadb16fca3f3b506e7f7f1387767e8bce1fa432d8d86c"}
Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.020373 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75c7d7775b-h7dp7" podStartSLOduration=3.938769717 podStartE2EDuration="43.0203392s" podCreationTimestamp="2025-12-09 14:24:56 +0000 UTC" firstStartedPulling="2025-12-09 14:24:58.998210423 +0000 UTC m=+781.923492670" lastFinishedPulling="2025-12-09 14:25:38.079779896 +0000 UTC m=+821.005062153" observedRunningTime="2025-12-09 14:25:39.008875919 +0000 UTC m=+821.934158176" watchObservedRunningTime="2025-12-09 14:25:39.0203392 +0000 UTC m=+821.945621447"
Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.030331 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-9z2bl" podStartSLOduration=2.252551592 podStartE2EDuration="38.030306364s" podCreationTimestamp="2025-12-09 14:25:01 +0000 UTC" firstStartedPulling="2025-12-09 14:25:02.31272005 +0000 UTC m=+785.238002297" lastFinishedPulling="2025-12-09 14:25:38.090474822 +0000 UTC m=+821.015757069" observedRunningTime="2025-12-09 14:25:39.024657716 +0000 UTC m=+821.949939993" watchObservedRunningTime="2025-12-09 14:25:39.030306364 +0000 UTC m=+821.955588621"
Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.050072 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-2mdk2" podStartSLOduration=15.557301865 podStartE2EDuration="42.050057445s" podCreationTimestamp="2025-12-09 14:24:57 +0000 UTC" firstStartedPulling="2025-12-09 14:25:00.325726604 +0000 UTC m=+783.251008851" lastFinishedPulling="2025-12-09 14:25:26.818482184 +0000 UTC m=+809.743764431" observedRunningTime="2025-12-09 14:25:39.047314769 +0000 UTC m=+821.972597036" watchObservedRunningTime="2025-12-09 14:25:39.050057445 +0000 UTC m=+821.975339692"
Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.081266 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-hqwqw" podStartSLOduration=4.279900266 podStartE2EDuration="42.081246736s" podCreationTimestamp="2025-12-09 14:24:57 
+0000 UTC" firstStartedPulling="2025-12-09 14:25:00.302998989 +0000 UTC m=+783.228281236" lastFinishedPulling="2025-12-09 14:25:38.104345459 +0000 UTC m=+821.029627706" observedRunningTime="2025-12-09 14:25:39.077806657 +0000 UTC m=+822.003088924" watchObservedRunningTime="2025-12-09 14:25:39.081246736 +0000 UTC m=+822.006528983" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.116002 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-tpkl8"] Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.123417 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-tpkl8"] Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.133178 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-sxl6t" podStartSLOduration=3.384274833 podStartE2EDuration="43.133161527s" podCreationTimestamp="2025-12-09 14:24:56 +0000 UTC" firstStartedPulling="2025-12-09 14:24:58.341589398 +0000 UTC m=+781.266871645" lastFinishedPulling="2025-12-09 14:25:38.090476092 +0000 UTC m=+821.015758339" observedRunningTime="2025-12-09 14:25:39.127184719 +0000 UTC m=+822.052466966" watchObservedRunningTime="2025-12-09 14:25:39.133161527 +0000 UTC m=+822.058443774" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.154568 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-7bbcfd86d7-xv7xt" podStartSLOduration=2.509190254 podStartE2EDuration="36.154545729s" podCreationTimestamp="2025-12-09 14:25:03 +0000 UTC" firstStartedPulling="2025-12-09 14:25:04.434436072 +0000 UTC m=+787.359718319" lastFinishedPulling="2025-12-09 14:25:38.079791547 +0000 UTC m=+821.005073794" observedRunningTime="2025-12-09 14:25:39.147098786 +0000 UTC m=+822.072381043" watchObservedRunningTime="2025-12-09 14:25:39.154545729 +0000 UTC m=+822.079827986" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.491127 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.491747 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8883851f-49c8-4275-a8b5-90f065c14dbd" containerName="extract-content" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.491763 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="8883851f-49c8-4275-a8b5-90f065c14dbd" containerName="extract-content" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.491775 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8883851f-49c8-4275-a8b5-90f065c14dbd" containerName="registry-server" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.491782 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="8883851f-49c8-4275-a8b5-90f065c14dbd" containerName="registry-server" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.491791 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8883851f-49c8-4275-a8b5-90f065c14dbd" containerName="extract-utilities" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.491797 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="8883851f-49c8-4275-a8b5-90f065c14dbd" containerName="extract-utilities" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.491809 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="3f277bd6-ea48-4729-960f-5a2b97bbfecc" containerName="registry" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.491814 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f277bd6-ea48-4729-960f-5a2b97bbfecc" containerName="registry" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.491926 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="8883851f-49c8-4275-a8b5-90f065c14dbd" containerName="registry-server" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.491939 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="3f277bd6-ea48-4729-960f-5a2b97bbfecc" containerName="registry" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.518188 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.518486 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.520508 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.528681 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.528790 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-mbtwh\"" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.529052 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.529100 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.529110 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.529176 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.529273 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.529325 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.615151 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.615211 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: 
\"kubernetes.io/secret/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.615242 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.615271 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.615299 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.615537 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.615598 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/c190e447-0fde-4640-ac4d-f68a5351ab58-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.615654 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.615747 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.615793 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.615869 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.615930 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/c190e447-0fde-4640-ac4d-f68a5351ab58-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.616017 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.616104 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c190e447-0fde-4640-ac4d-f68a5351ab58-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.616164 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/c190e447-0fde-4640-ac4d-f68a5351ab58-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.717634 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c190e447-0fde-4640-ac4d-f68a5351ab58-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.717731 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/c190e447-0fde-4640-ac4d-f68a5351ab58-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.717768 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-http-certificates\") pod 
\"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.717799 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.717827 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.718160 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.718227 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.718340 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.718394 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/c190e447-0fde-4640-ac4d-f68a5351ab58-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.718431 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.718454 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.718360 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/c190e447-0fde-4640-ac4d-f68a5351ab58-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.718625 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.718695 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.718723 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.718764 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.718796 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/c190e447-0fde-4640-ac4d-f68a5351ab58-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.718865 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.718765 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/c190e447-0fde-4640-ac4d-f68a5351ab58-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.719260 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.719296 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.719745 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.726465 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.726526 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/c190e447-0fde-4640-ac4d-f68a5351ab58-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.730160 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.730309 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c190e447-0fde-4640-ac4d-f68a5351ab58-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.730451 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.743722 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: 
\"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.745457 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.745504 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/c190e447-0fde-4640-ac4d-f68a5351ab58-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"c190e447-0fde-4640-ac4d-f68a5351ab58\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.878660 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f277bd6-ea48-4729-960f-5a2b97bbfecc" path="/var/lib/kubelet/pods/3f277bd6-ea48-4729-960f-5a2b97bbfecc/volumes" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.879823 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8883851f-49c8-4275-a8b5-90f065c14dbd" path="/var/lib/kubelet/pods/8883851f-49c8-4275-a8b5-90f065c14dbd/volumes" Dec 09 14:25:39 crc kubenswrapper[5173]: I1209 14:25:39.886284 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:25:40 crc kubenswrapper[5173]: I1209 14:25:40.030040 5173 generic.go:358] "Generic (PLEG): container finished" podID="232e3462-a5e6-4098-b4bd-018cba0b4444" containerID="c3f8bf5110255a7719d279b75349e8e647c6aabe7043010f1e45a7b46bcaaab8" exitCode=0 Dec 09 14:25:40 crc kubenswrapper[5173]: I1209 14:25:40.030268 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr" event={"ID":"232e3462-a5e6-4098-b4bd-018cba0b4444","Type":"ContainerDied","Data":"c3f8bf5110255a7719d279b75349e8e647c6aabe7043010f1e45a7b46bcaaab8"} Dec 09 14:25:40 crc kubenswrapper[5173]: I1209 14:25:40.254506 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 09 14:25:41 crc kubenswrapper[5173]: I1209 14:25:41.042610 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c190e447-0fde-4640-ac4d-f68a5351ab58","Type":"ContainerStarted","Data":"bb5a3d1be25ca66a1e4cd51dfbfd98b1e417a9faae75d8131f17ec8bc899f984"} Dec 09 14:25:41 crc kubenswrapper[5173]: I1209 14:25:41.361390 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr" Dec 09 14:25:41 crc kubenswrapper[5173]: I1209 14:25:41.441274 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/232e3462-a5e6-4098-b4bd-018cba0b4444-bundle\") pod \"232e3462-a5e6-4098-b4bd-018cba0b4444\" (UID: \"232e3462-a5e6-4098-b4bd-018cba0b4444\") " Dec 09 14:25:41 crc kubenswrapper[5173]: I1209 14:25:41.441443 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snmxx\" (UniqueName: \"kubernetes.io/projected/232e3462-a5e6-4098-b4bd-018cba0b4444-kube-api-access-snmxx\") pod \"232e3462-a5e6-4098-b4bd-018cba0b4444\" (UID: \"232e3462-a5e6-4098-b4bd-018cba0b4444\") " Dec 09 14:25:41 crc kubenswrapper[5173]: I1209 14:25:41.441539 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/232e3462-a5e6-4098-b4bd-018cba0b4444-util\") pod \"232e3462-a5e6-4098-b4bd-018cba0b4444\" (UID: \"232e3462-a5e6-4098-b4bd-018cba0b4444\") " Dec 09 14:25:41 crc kubenswrapper[5173]: I1209 14:25:41.443662 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/232e3462-a5e6-4098-b4bd-018cba0b4444-bundle" (OuterVolumeSpecName: "bundle") pod "232e3462-a5e6-4098-b4bd-018cba0b4444" (UID: "232e3462-a5e6-4098-b4bd-018cba0b4444"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:25:41 crc kubenswrapper[5173]: I1209 14:25:41.452484 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/232e3462-a5e6-4098-b4bd-018cba0b4444-util" (OuterVolumeSpecName: "util") pod "232e3462-a5e6-4098-b4bd-018cba0b4444" (UID: "232e3462-a5e6-4098-b4bd-018cba0b4444"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:25:41 crc kubenswrapper[5173]: I1209 14:25:41.455720 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/232e3462-a5e6-4098-b4bd-018cba0b4444-kube-api-access-snmxx" (OuterVolumeSpecName: "kube-api-access-snmxx") pod "232e3462-a5e6-4098-b4bd-018cba0b4444" (UID: "232e3462-a5e6-4098-b4bd-018cba0b4444"). InnerVolumeSpecName "kube-api-access-snmxx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:25:41 crc kubenswrapper[5173]: I1209 14:25:41.543525 5173 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/232e3462-a5e6-4098-b4bd-018cba0b4444-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:41 crc kubenswrapper[5173]: I1209 14:25:41.543559 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-snmxx\" (UniqueName: \"kubernetes.io/projected/232e3462-a5e6-4098-b4bd-018cba0b4444-kube-api-access-snmxx\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:41 crc kubenswrapper[5173]: I1209 14:25:41.543574 5173 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/232e3462-a5e6-4098-b4bd-018cba0b4444-util\") on node \"crc\" DevicePath \"\"" Dec 09 14:25:42 crc kubenswrapper[5173]: I1209 14:25:42.056773 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr" event={"ID":"232e3462-a5e6-4098-b4bd-018cba0b4444","Type":"ContainerDied","Data":"8ac824694f442ebd10794fe54048ace7122cd427bdd8fcbf3c3cceff14f332c6"} Dec 09 14:25:42 crc kubenswrapper[5173]: I1209 14:25:42.057178 5173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ac824694f442ebd10794fe54048ace7122cd427bdd8fcbf3c3cceff14f332c6" Dec 09 14:25:42 crc kubenswrapper[5173]: I1209 14:25:42.056826 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ah5bmr" Dec 09 14:25:49 crc kubenswrapper[5173]: I1209 14:25:49.084873 5173 patch_prober.go:28] interesting pod/machine-config-daemon-pxfmg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:25:49 crc kubenswrapper[5173]: I1209 14:25:49.085131 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:25:49 crc kubenswrapper[5173]: I1209 14:25:49.085173 5173 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" Dec 09 14:25:49 crc kubenswrapper[5173]: I1209 14:25:49.085717 5173 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"859cb3132f564d2a8f9a55f99e30a3d865a9afbcb1dbb53a0523762f86be0540"} pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 14:25:49 crc kubenswrapper[5173]: I1209 14:25:49.085771 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" containerID="cri-o://859cb3132f564d2a8f9a55f99e30a3d865a9afbcb1dbb53a0523762f86be0540" gracePeriod=600 Dec 09 14:25:50 crc kubenswrapper[5173]: I1209 14:25:50.065977 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-2mdk2" Dec 09 14:25:50 crc kubenswrapper[5173]: I1209 14:25:50.120717 5173 generic.go:358] "Generic (PLEG): container finished" podID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerID="859cb3132f564d2a8f9a55f99e30a3d865a9afbcb1dbb53a0523762f86be0540" exitCode=0 Dec 09 14:25:50 crc kubenswrapper[5173]: I1209 14:25:50.120794 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" event={"ID":"8a8dd347-8a1b-4551-a318-abe7c12df817","Type":"ContainerDied","Data":"859cb3132f564d2a8f9a55f99e30a3d865a9afbcb1dbb53a0523762f86be0540"} Dec 09 14:25:50 crc kubenswrapper[5173]: I1209 14:25:50.120857 5173 scope.go:117] "RemoveContainer" containerID="93d3de927b38141662865320582f39fe7791933ed43554feef05eeb6852d67b1" Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.674118 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9kz2q"] Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.675030 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="232e3462-a5e6-4098-b4bd-018cba0b4444" containerName="pull" Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.675045 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="232e3462-a5e6-4098-b4bd-018cba0b4444" containerName="pull" Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.675078 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="232e3462-a5e6-4098-b4bd-018cba0b4444" containerName="util" Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.675084 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="232e3462-a5e6-4098-b4bd-018cba0b4444" containerName="util" Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.675101 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="232e3462-a5e6-4098-b4bd-018cba0b4444" containerName="extract" Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.675107 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="232e3462-a5e6-4098-b4bd-018cba0b4444" containerName="extract" Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.675190 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="232e3462-a5e6-4098-b4bd-018cba0b4444" containerName="extract" Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.681695 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9kz2q" Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.688166 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-gtg42\"" Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.688247 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.688499 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bc86\" (UniqueName: \"kubernetes.io/projected/4847cf66-d482-44a5-884e-d6e5f1d67e8c-kube-api-access-6bc86\") pod \"cert-manager-operator-controller-manager-64c74584c4-9kz2q\" (UID: \"4847cf66-d482-44a5-884e-d6e5f1d67e8c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9kz2q" Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.688547 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4847cf66-d482-44a5-884e-d6e5f1d67e8c-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-9kz2q\" (UID: \"4847cf66-d482-44a5-884e-d6e5f1d67e8c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9kz2q" Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.688584 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.708539 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9kz2q"] Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.789459 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4847cf66-d482-44a5-884e-d6e5f1d67e8c-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-9kz2q\" (UID: \"4847cf66-d482-44a5-884e-d6e5f1d67e8c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9kz2q" Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.789572 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6bc86\" (UniqueName: \"kubernetes.io/projected/4847cf66-d482-44a5-884e-d6e5f1d67e8c-kube-api-access-6bc86\") pod \"cert-manager-operator-controller-manager-64c74584c4-9kz2q\" (UID: \"4847cf66-d482-44a5-884e-d6e5f1d67e8c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9kz2q" Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.790073 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4847cf66-d482-44a5-884e-d6e5f1d67e8c-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-9kz2q\" (UID: \"4847cf66-d482-44a5-884e-d6e5f1d67e8c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9kz2q" Dec 09 14:25:52 crc kubenswrapper[5173]: I1209 14:25:52.901507 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bc86\" (UniqueName: \"kubernetes.io/projected/4847cf66-d482-44a5-884e-d6e5f1d67e8c-kube-api-access-6bc86\") pod \"cert-manager-operator-controller-manager-64c74584c4-9kz2q\" 
(UID: \"4847cf66-d482-44a5-884e-d6e5f1d67e8c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9kz2q" Dec 09 14:25:53 crc kubenswrapper[5173]: I1209 14:25:53.009754 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9kz2q" Dec 09 14:25:55 crc kubenswrapper[5173]: I1209 14:25:55.319848 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.491420 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.493555 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.493641 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-qgrnl\"" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.494684 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.494865 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.513631 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.565897 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7lx4\" (UniqueName: \"kubernetes.io/projected/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-kube-api-access-z7lx4\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.566236 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.566260 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-builder-dockercfg-qgrnl-push\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.566277 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 
14:25:56.566311 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.566397 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.566423 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.566475 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.566508 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.566533 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-builder-dockercfg-qgrnl-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.566552 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.566573 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.667494 5173 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.667560 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-builder-dockercfg-qgrnl-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.667583 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.667611 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.667650 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z7lx4\" (UniqueName: \"kubernetes.io/projected/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-kube-api-access-z7lx4\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.667702 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.667732 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-builder-dockercfg-qgrnl-push\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.667751 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.667787 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-node-pullsecrets\") 
pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.667828 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.667857 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.667904 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.668264 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.668307 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.668461 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.668515 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.668667 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.668832 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.670109 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.670687 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.674331 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.676919 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-builder-dockercfg-qgrnl-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.676956 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-builder-dockercfg-qgrnl-push\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.686091 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7lx4\" (UniqueName: \"kubernetes.io/projected/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-kube-api-access-z7lx4\") pod \"service-telemetry-operator-1-build\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:56 crc kubenswrapper[5173]: I1209 14:25:56.865714 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 09 14:25:59 crc kubenswrapper[5173]: I1209 14:25:59.170765 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 09 14:25:59 crc kubenswrapper[5173]: W1209 14:25:59.185499 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a0b0488_af83_49f4_a18a_3b70e04aaaf2.slice/crio-fd7baec73873834e4745cea55939796ebdb83cebae6178eaa1a2d689c44c891b WatchSource:0}: Error finding container fd7baec73873834e4745cea55939796ebdb83cebae6178eaa1a2d689c44c891b: Status 404 returned error can't find the container with id fd7baec73873834e4745cea55939796ebdb83cebae6178eaa1a2d689c44c891b Dec 09 14:25:59 crc kubenswrapper[5173]: I1209 14:25:59.201820 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"7a0b0488-af83-49f4-a18a-3b70e04aaaf2","Type":"ContainerStarted","Data":"fd7baec73873834e4745cea55939796ebdb83cebae6178eaa1a2d689c44c891b"} Dec 09 14:25:59 crc kubenswrapper[5173]: I1209 14:25:59.278305 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9kz2q"] Dec 09 14:25:59 crc kubenswrapper[5173]: W1209 14:25:59.284752 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4847cf66_d482_44a5_884e_d6e5f1d67e8c.slice/crio-f47ee1cd05c9ac871d3bc4e4f3ce8a690a460f543324f70fcb9ab0707c8b32db WatchSource:0}: Error finding container f47ee1cd05c9ac871d3bc4e4f3ce8a690a460f543324f70fcb9ab0707c8b32db: Status 404 returned error can't find the container with id f47ee1cd05c9ac871d3bc4e4f3ce8a690a460f543324f70fcb9ab0707c8b32db Dec 09 14:26:00 crc kubenswrapper[5173]: I1209 14:26:00.209901 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c190e447-0fde-4640-ac4d-f68a5351ab58","Type":"ContainerStarted","Data":"6b4e1973f24fa2934de2e8ec185a414525813a44abe24829f4c2a0f36195d961"} Dec 09 14:26:00 crc kubenswrapper[5173]: I1209 14:26:00.213365 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9kz2q" event={"ID":"4847cf66-d482-44a5-884e-d6e5f1d67e8c","Type":"ContainerStarted","Data":"f47ee1cd05c9ac871d3bc4e4f3ce8a690a460f543324f70fcb9ab0707c8b32db"} Dec 09 14:26:00 crc kubenswrapper[5173]: I1209 14:26:00.216447 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" event={"ID":"8a8dd347-8a1b-4551-a318-abe7c12df817","Type":"ContainerStarted","Data":"ec21fae24ed5b475fc335cf81728357994814b2a3f96c37e355ce8993f76f7cf"} Dec 09 14:26:00 crc kubenswrapper[5173]: I1209 14:26:00.373308 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 09 14:26:00 crc kubenswrapper[5173]: I1209 14:26:00.406149 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 09 14:26:02 crc kubenswrapper[5173]: I1209 14:26:02.229492 5173 generic.go:358] "Generic (PLEG): container finished" podID="c190e447-0fde-4640-ac4d-f68a5351ab58" containerID="6b4e1973f24fa2934de2e8ec185a414525813a44abe24829f4c2a0f36195d961" exitCode=0 Dec 09 14:26:02 crc kubenswrapper[5173]: I1209 14:26:02.229603 5173 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c190e447-0fde-4640-ac4d-f68a5351ab58","Type":"ContainerDied","Data":"6b4e1973f24fa2934de2e8ec185a414525813a44abe24829f4c2a0f36195d961"} Dec 09 14:26:05 crc kubenswrapper[5173]: I1209 14:26:05.261487 5173 generic.go:358] "Generic (PLEG): container finished" podID="c190e447-0fde-4640-ac4d-f68a5351ab58" containerID="0eee101b023a50fc15fc9a9e148feb42a32b61deb974647c32a4661e703a7c0a" exitCode=0 Dec 09 14:26:05 crc kubenswrapper[5173]: I1209 14:26:05.261552 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c190e447-0fde-4640-ac4d-f68a5351ab58","Type":"ContainerDied","Data":"0eee101b023a50fc15fc9a9e148feb42a32b61deb974647c32a4661e703a7c0a"} Dec 09 14:26:05 crc kubenswrapper[5173]: I1209 14:26:05.759665 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 09 14:26:06 crc kubenswrapper[5173]: I1209 14:26:06.275501 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c190e447-0fde-4640-ac4d-f68a5351ab58","Type":"ContainerStarted","Data":"e071cd052bece6764c62c4c747e23dddc30afae54a25a40f70b69c72a441a82c"} Dec 09 14:26:06 crc kubenswrapper[5173]: I1209 14:26:06.275653 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.285390 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9kz2q" event={"ID":"4847cf66-d482-44a5-884e-d6e5f1d67e8c","Type":"ContainerStarted","Data":"658d8c48238038878eee845be17687eae45b4b06279e0e29cd588ff4e29dada1"} Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.306942 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-9kz2q" podStartSLOduration=7.923896699 podStartE2EDuration="15.306922489s" podCreationTimestamp="2025-12-09 14:25:52 +0000 UTC" firstStartedPulling="2025-12-09 14:25:59.287981228 +0000 UTC m=+842.213263485" lastFinishedPulling="2025-12-09 14:26:06.671007018 +0000 UTC m=+849.596289275" observedRunningTime="2025-12-09 14:26:07.300213798 +0000 UTC m=+850.225496055" watchObservedRunningTime="2025-12-09 14:26:07.306922489 +0000 UTC m=+850.232204736" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.307400 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=9.362744805 podStartE2EDuration="28.307396014s" podCreationTimestamp="2025-12-09 14:25:39 +0000 UTC" firstStartedPulling="2025-12-09 14:25:40.265665287 +0000 UTC m=+823.190947534" lastFinishedPulling="2025-12-09 14:25:59.210316496 +0000 UTC m=+842.135598743" observedRunningTime="2025-12-09 14:26:06.315629577 +0000 UTC m=+849.240911834" watchObservedRunningTime="2025-12-09 14:26:07.307396014 +0000 UTC m=+850.232678261" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.448345 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.454757 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.458027 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\"" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.458396 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\"" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.459020 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\"" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.460972 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.537235 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/4efd88b5-7256-43e0-a227-c6ca79e8ec01-builder-dockercfg-qgrnl-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.537295 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.537322 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.537369 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4efd88b5-7256-43e0-a227-c6ca79e8ec01-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.537399 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.537422 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/4efd88b5-7256-43e0-a227-c6ca79e8ec01-builder-dockercfg-qgrnl-push\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc 
kubenswrapper[5173]: I1209 14:26:07.537437 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.537458 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4efd88b5-7256-43e0-a227-c6ca79e8ec01-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.537489 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.537507 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.537531 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnvpc\" (UniqueName: \"kubernetes.io/projected/4efd88b5-7256-43e0-a227-c6ca79e8ec01-kube-api-access-fnvpc\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.537563 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.640048 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.640113 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4efd88b5-7256-43e0-a227-c6ca79e8ec01-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.640147 5173 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.640231 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4efd88b5-7256-43e0-a227-c6ca79e8ec01-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.640298 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/4efd88b5-7256-43e0-a227-c6ca79e8ec01-builder-dockercfg-qgrnl-push\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.640366 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.640397 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4efd88b5-7256-43e0-a227-c6ca79e8ec01-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.640443 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.640466 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.640510 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fnvpc\" (UniqueName: \"kubernetes.io/projected/4efd88b5-7256-43e0-a227-c6ca79e8ec01-kube-api-access-fnvpc\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.640533 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: 
\"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.640591 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.640631 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/4efd88b5-7256-43e0-a227-c6ca79e8ec01-builder-dockercfg-qgrnl-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.640581 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4efd88b5-7256-43e0-a227-c6ca79e8ec01-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.640724 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.640768 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.640858 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.641019 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.641612 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.641784 5173 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.642298 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.648587 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/4efd88b5-7256-43e0-a227-c6ca79e8ec01-builder-dockercfg-qgrnl-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.652936 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/4efd88b5-7256-43e0-a227-c6ca79e8ec01-builder-dockercfg-qgrnl-push\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.664751 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnvpc\" (UniqueName: \"kubernetes.io/projected/4efd88b5-7256-43e0-a227-c6ca79e8ec01-kube-api-access-fnvpc\") pod \"service-telemetry-operator-2-build\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:07 crc kubenswrapper[5173]: I1209 14:26:07.891075 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 09 14:26:09 crc kubenswrapper[5173]: I1209 14:26:09.394700 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-vqw6b"] Dec 09 14:26:09 crc kubenswrapper[5173]: I1209 14:26:09.408950 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-vqw6b"] Dec 09 14:26:09 crc kubenswrapper[5173]: I1209 14:26:09.409450 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-vqw6b" Dec 09 14:26:09 crc kubenswrapper[5173]: I1209 14:26:09.411993 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Dec 09 14:26:09 crc kubenswrapper[5173]: I1209 14:26:09.412229 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Dec 09 14:26:09 crc kubenswrapper[5173]: I1209 14:26:09.412396 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-9wd65\"" Dec 09 14:26:09 crc kubenswrapper[5173]: I1209 14:26:09.467571 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5r4r\" (UniqueName: \"kubernetes.io/projected/96a44790-ee4a-43d7-9ab0-828f9939c2b5-kube-api-access-d5r4r\") pod \"cert-manager-webhook-7894b5b9b4-vqw6b\" (UID: \"96a44790-ee4a-43d7-9ab0-828f9939c2b5\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-vqw6b" Dec 09 14:26:09 crc kubenswrapper[5173]: I1209 14:26:09.467726 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96a44790-ee4a-43d7-9ab0-828f9939c2b5-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-vqw6b\" (UID: \"96a44790-ee4a-43d7-9ab0-828f9939c2b5\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-vqw6b" Dec 09 14:26:09 crc kubenswrapper[5173]: I1209 14:26:09.568731 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d5r4r\" (UniqueName: \"kubernetes.io/projected/96a44790-ee4a-43d7-9ab0-828f9939c2b5-kube-api-access-d5r4r\") pod \"cert-manager-webhook-7894b5b9b4-vqw6b\" (UID: \"96a44790-ee4a-43d7-9ab0-828f9939c2b5\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-vqw6b" Dec 09 14:26:09 crc kubenswrapper[5173]: I1209 14:26:09.569139 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96a44790-ee4a-43d7-9ab0-828f9939c2b5-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-vqw6b\" (UID: \"96a44790-ee4a-43d7-9ab0-828f9939c2b5\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-vqw6b" Dec 09 14:26:09 crc kubenswrapper[5173]: I1209 14:26:09.596489 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96a44790-ee4a-43d7-9ab0-828f9939c2b5-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-vqw6b\" (UID: \"96a44790-ee4a-43d7-9ab0-828f9939c2b5\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-vqw6b" Dec 09 14:26:09 crc kubenswrapper[5173]: I1209 14:26:09.596593 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5r4r\" (UniqueName: \"kubernetes.io/projected/96a44790-ee4a-43d7-9ab0-828f9939c2b5-kube-api-access-d5r4r\") pod \"cert-manager-webhook-7894b5b9b4-vqw6b\" (UID: \"96a44790-ee4a-43d7-9ab0-828f9939c2b5\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-vqw6b" Dec 09 14:26:09 crc kubenswrapper[5173]: I1209 14:26:09.726779 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-vqw6b" Dec 09 14:26:12 crc kubenswrapper[5173]: I1209 14:26:12.760855 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-vqw6b"] Dec 09 14:26:12 crc kubenswrapper[5173]: W1209 14:26:12.763021 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96a44790_ee4a_43d7_9ab0_828f9939c2b5.slice/crio-4a9ac2bbc4095200298a7a09029f0c611a81a7c72aba83f479d1c4482dba0091 WatchSource:0}: Error finding container 4a9ac2bbc4095200298a7a09029f0c611a81a7c72aba83f479d1c4482dba0091: Status 404 returned error can't find the container with id 4a9ac2bbc4095200298a7a09029f0c611a81a7c72aba83f479d1c4482dba0091 Dec 09 14:26:12 crc kubenswrapper[5173]: I1209 14:26:12.785232 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 09 14:26:12 crc kubenswrapper[5173]: W1209 14:26:12.791893 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4efd88b5_7256_43e0_a227_c6ca79e8ec01.slice/crio-7a3ea7d2ee5be28c664bdbef6095c7c7ce31c22117d95586b8bb6949f436dff5 WatchSource:0}: Error finding container 7a3ea7d2ee5be28c664bdbef6095c7c7ce31c22117d95586b8bb6949f436dff5: Status 404 returned error can't find the container with id 7a3ea7d2ee5be28c664bdbef6095c7c7ce31c22117d95586b8bb6949f436dff5 Dec 09 14:26:13 crc kubenswrapper[5173]: I1209 14:26:13.132385 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-2nzbr"] Dec 09 14:26:13 crc kubenswrapper[5173]: I1209 14:26:13.176125 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2nzbr" Dec 09 14:26:13 crc kubenswrapper[5173]: I1209 14:26:13.182240 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-hckn2\"" Dec 09 14:26:13 crc kubenswrapper[5173]: I1209 14:26:13.211330 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-2nzbr"] Dec 09 14:26:13 crc kubenswrapper[5173]: I1209 14:26:13.232132 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4df7e70b-0991-424f-b261-bd498653f853-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-2nzbr\" (UID: \"4df7e70b-0991-424f-b261-bd498653f853\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2nzbr" Dec 09 14:26:13 crc kubenswrapper[5173]: I1209 14:26:13.232541 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmwcz\" (UniqueName: \"kubernetes.io/projected/4df7e70b-0991-424f-b261-bd498653f853-kube-api-access-pmwcz\") pod \"cert-manager-cainjector-7dbf76d5c8-2nzbr\" (UID: \"4df7e70b-0991-424f-b261-bd498653f853\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2nzbr" Dec 09 14:26:13 crc kubenswrapper[5173]: I1209 14:26:13.333984 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4df7e70b-0991-424f-b261-bd498653f853-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-2nzbr\" (UID: \"4df7e70b-0991-424f-b261-bd498653f853\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2nzbr" Dec 09 14:26:13 crc kubenswrapper[5173]: I1209 14:26:13.334028 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pmwcz\" (UniqueName: \"kubernetes.io/projected/4df7e70b-0991-424f-b261-bd498653f853-kube-api-access-pmwcz\") pod \"cert-manager-cainjector-7dbf76d5c8-2nzbr\" (UID: \"4df7e70b-0991-424f-b261-bd498653f853\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2nzbr" Dec 09 14:26:13 crc kubenswrapper[5173]: I1209 14:26:13.356019 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4df7e70b-0991-424f-b261-bd498653f853-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-2nzbr\" (UID: \"4df7e70b-0991-424f-b261-bd498653f853\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2nzbr" Dec 09 14:26:13 crc kubenswrapper[5173]: I1209 14:26:13.362879 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"4efd88b5-7256-43e0-a227-c6ca79e8ec01","Type":"ContainerStarted","Data":"7a3ea7d2ee5be28c664bdbef6095c7c7ce31c22117d95586b8bb6949f436dff5"} Dec 09 14:26:13 crc kubenswrapper[5173]: I1209 14:26:13.364509 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-vqw6b" event={"ID":"96a44790-ee4a-43d7-9ab0-828f9939c2b5","Type":"ContainerStarted","Data":"4a9ac2bbc4095200298a7a09029f0c611a81a7c72aba83f479d1c4482dba0091"} Dec 09 14:26:13 crc kubenswrapper[5173]: I1209 14:26:13.371118 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmwcz\" (UniqueName: \"kubernetes.io/projected/4df7e70b-0991-424f-b261-bd498653f853-kube-api-access-pmwcz\") pod 
\"cert-manager-cainjector-7dbf76d5c8-2nzbr\" (UID: \"4df7e70b-0991-424f-b261-bd498653f853\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2nzbr" Dec 09 14:26:13 crc kubenswrapper[5173]: I1209 14:26:13.545601 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2nzbr" Dec 09 14:26:13 crc kubenswrapper[5173]: I1209 14:26:13.775795 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-2nzbr"] Dec 09 14:26:13 crc kubenswrapper[5173]: W1209 14:26:13.777821 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4df7e70b_0991_424f_b261_bd498653f853.slice/crio-4accf655ef0f795544e1fba0f8408771c5b8c09dc64c7044e720c1c8f060ed81 WatchSource:0}: Error finding container 4accf655ef0f795544e1fba0f8408771c5b8c09dc64c7044e720c1c8f060ed81: Status 404 returned error can't find the container with id 4accf655ef0f795544e1fba0f8408771c5b8c09dc64c7044e720c1c8f060ed81 Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.374752 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"7a0b0488-af83-49f4-a18a-3b70e04aaaf2","Type":"ContainerStarted","Data":"f793a3f96efac3e360814233241d96fc39a86f47d57f66a4461eb989c3732640"} Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.374828 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="7a0b0488-af83-49f4-a18a-3b70e04aaaf2" containerName="manage-dockerfile" containerID="cri-o://f793a3f96efac3e360814233241d96fc39a86f47d57f66a4461eb989c3732640" gracePeriod=30 Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.378529 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"4efd88b5-7256-43e0-a227-c6ca79e8ec01","Type":"ContainerStarted","Data":"0dc638dcafa3d83dc1e64bf93596cb7c43d9f1ce712972b833aaf7ab5c25ba80"} Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.380030 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2nzbr" event={"ID":"4df7e70b-0991-424f-b261-bd498653f853","Type":"ContainerStarted","Data":"4accf655ef0f795544e1fba0f8408771c5b8c09dc64c7044e720c1c8f060ed81"} Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.464777 5173 ???:1] "http: TLS handshake error from 192.168.126.11:55448: no serving certificate available for the kubelet" Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.869761 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_7a0b0488-af83-49f4-a18a-3b70e04aaaf2/manage-dockerfile/0.log" Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.869851 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.958087 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-proxy-ca-bundles\") pod \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") "
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.958132 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-buildcachedir\") pod \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") "
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.958154 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-node-pullsecrets\") pod \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") "
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.958218 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-container-storage-root\") pod \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") "
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.958234 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7lx4\" (UniqueName: \"kubernetes.io/projected/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-kube-api-access-z7lx4\") pod \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") "
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.958757 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "7a0b0488-af83-49f4-a18a-3b70e04aaaf2" (UID: "7a0b0488-af83-49f4-a18a-3b70e04aaaf2"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.958829 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-container-storage-run\") pod \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") "
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.958873 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-blob-cache\") pod \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") "
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.958908 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-ca-bundles\") pod \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") "
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.958982 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-builder-dockercfg-qgrnl-push\") pod \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") "
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.959011 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-builder-dockercfg-qgrnl-pull\") pod \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") "
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.959038 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "7a0b0488-af83-49f4-a18a-3b70e04aaaf2" (UID: "7a0b0488-af83-49f4-a18a-3b70e04aaaf2"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.959073 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-system-configs\") pod \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") "
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.959169 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "7a0b0488-af83-49f4-a18a-3b70e04aaaf2" (UID: "7a0b0488-af83-49f4-a18a-3b70e04aaaf2"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.959782 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-buildworkdir\") pod \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\" (UID: \"7a0b0488-af83-49f4-a18a-3b70e04aaaf2\") "
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.959843 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "7a0b0488-af83-49f4-a18a-3b70e04aaaf2" (UID: "7a0b0488-af83-49f4-a18a-3b70e04aaaf2"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.959881 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "7a0b0488-af83-49f4-a18a-3b70e04aaaf2" (UID: "7a0b0488-af83-49f4-a18a-3b70e04aaaf2"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.959887 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "7a0b0488-af83-49f4-a18a-3b70e04aaaf2" (UID: "7a0b0488-af83-49f4-a18a-3b70e04aaaf2"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.960252 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "7a0b0488-af83-49f4-a18a-3b70e04aaaf2" (UID: "7a0b0488-af83-49f4-a18a-3b70e04aaaf2"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.960274 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "7a0b0488-af83-49f4-a18a-3b70e04aaaf2" (UID: "7a0b0488-af83-49f4-a18a-3b70e04aaaf2"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.960478 5173 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.960498 5173 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.960511 5173 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.960526 5173 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.960529 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "7a0b0488-af83-49f4-a18a-3b70e04aaaf2" (UID: "7a0b0488-af83-49f4-a18a-3b70e04aaaf2"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.960538 5173 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.960550 5173 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.960561 5173 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.960575 5173 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.965176 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-builder-dockercfg-qgrnl-pull" (OuterVolumeSpecName: "builder-dockercfg-qgrnl-pull") pod "7a0b0488-af83-49f4-a18a-3b70e04aaaf2" (UID: "7a0b0488-af83-49f4-a18a-3b70e04aaaf2"). InnerVolumeSpecName "builder-dockercfg-qgrnl-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.965274 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-kube-api-access-z7lx4" (OuterVolumeSpecName: "kube-api-access-z7lx4") pod "7a0b0488-af83-49f4-a18a-3b70e04aaaf2" (UID: "7a0b0488-af83-49f4-a18a-3b70e04aaaf2"). InnerVolumeSpecName "kube-api-access-z7lx4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 09 14:26:14 crc kubenswrapper[5173]: I1209 14:26:14.966528 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-builder-dockercfg-qgrnl-push" (OuterVolumeSpecName: "builder-dockercfg-qgrnl-push") pod "7a0b0488-af83-49f4-a18a-3b70e04aaaf2" (UID: "7a0b0488-af83-49f4-a18a-3b70e04aaaf2"). InnerVolumeSpecName "builder-dockercfg-qgrnl-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 09 14:26:15 crc kubenswrapper[5173]: I1209 14:26:15.062309 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z7lx4\" (UniqueName: \"kubernetes.io/projected/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-kube-api-access-z7lx4\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:15 crc kubenswrapper[5173]: I1209 14:26:15.062363 5173 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:15 crc kubenswrapper[5173]: I1209 14:26:15.062377 5173 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-builder-dockercfg-qgrnl-push\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:15 crc kubenswrapper[5173]: I1209 14:26:15.062393 5173 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/7a0b0488-af83-49f4-a18a-3b70e04aaaf2-builder-dockercfg-qgrnl-pull\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:15 crc kubenswrapper[5173]: I1209 14:26:15.394624 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_7a0b0488-af83-49f4-a18a-3b70e04aaaf2/manage-dockerfile/0.log"
Dec 09 14:26:15 crc kubenswrapper[5173]: I1209 14:26:15.394672 5173 generic.go:358] "Generic (PLEG): container finished" podID="7a0b0488-af83-49f4-a18a-3b70e04aaaf2" containerID="f793a3f96efac3e360814233241d96fc39a86f47d57f66a4461eb989c3732640" exitCode=1
Dec 09 14:26:15 crc kubenswrapper[5173]: I1209 14:26:15.394759 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Dec 09 14:26:15 crc kubenswrapper[5173]: I1209 14:26:15.394831 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"7a0b0488-af83-49f4-a18a-3b70e04aaaf2","Type":"ContainerDied","Data":"f793a3f96efac3e360814233241d96fc39a86f47d57f66a4461eb989c3732640"}
Dec 09 14:26:15 crc kubenswrapper[5173]: I1209 14:26:15.394888 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"7a0b0488-af83-49f4-a18a-3b70e04aaaf2","Type":"ContainerDied","Data":"fd7baec73873834e4745cea55939796ebdb83cebae6178eaa1a2d689c44c891b"}
Dec 09 14:26:15 crc kubenswrapper[5173]: I1209 14:26:15.394907 5173 scope.go:117] "RemoveContainer" containerID="f793a3f96efac3e360814233241d96fc39a86f47d57f66a4461eb989c3732640"
Dec 09 14:26:15 crc kubenswrapper[5173]: I1209 14:26:15.425117 5173 scope.go:117] "RemoveContainer" containerID="f793a3f96efac3e360814233241d96fc39a86f47d57f66a4461eb989c3732640"
Dec 09 14:26:15 crc kubenswrapper[5173]: E1209 14:26:15.425719 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f793a3f96efac3e360814233241d96fc39a86f47d57f66a4461eb989c3732640\": container with ID starting with f793a3f96efac3e360814233241d96fc39a86f47d57f66a4461eb989c3732640 not found: ID does not exist" containerID="f793a3f96efac3e360814233241d96fc39a86f47d57f66a4461eb989c3732640"
Dec 09 14:26:15 crc kubenswrapper[5173]: I1209 14:26:15.425761 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f793a3f96efac3e360814233241d96fc39a86f47d57f66a4461eb989c3732640"} err="failed to get container status \"f793a3f96efac3e360814233241d96fc39a86f47d57f66a4461eb989c3732640\": rpc error: code = NotFound desc = could not find container \"f793a3f96efac3e360814233241d96fc39a86f47d57f66a4461eb989c3732640\": container with ID starting with f793a3f96efac3e360814233241d96fc39a86f47d57f66a4461eb989c3732640 not found: ID does not exist"
Dec 09 14:26:15 crc kubenswrapper[5173]: I1209 14:26:15.434993 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Dec 09 14:26:15 crc kubenswrapper[5173]: I1209 14:26:15.439253 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Dec 09 14:26:15 crc kubenswrapper[5173]: I1209 14:26:15.497185 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Dec 09 14:26:15 crc kubenswrapper[5173]: I1209 14:26:15.961634 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a0b0488-af83-49f4-a18a-3b70e04aaaf2" path="/var/lib/kubelet/pods/7a0b0488-af83-49f4-a18a-3b70e04aaaf2/volumes"
Dec 09 14:26:16 crc kubenswrapper[5173]: I1209 14:26:16.475689 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-2-build" podUID="4efd88b5-7256-43e0-a227-c6ca79e8ec01" containerName="git-clone" containerID="cri-o://0dc638dcafa3d83dc1e64bf93596cb7c43d9f1ce712972b833aaf7ab5c25ba80" gracePeriod=30
Dec 09 14:26:17 crc kubenswrapper[5173]: I1209 14:26:17.377467 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="c190e447-0fde-4640-ac4d-f68a5351ab58" containerName="elasticsearch" probeResult="failure" output=<
Dec 09 14:26:17 crc kubenswrapper[5173]: {"timestamp": "2025-12-09T14:26:17+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Dec 09 14:26:17 crc kubenswrapper[5173]: >
Dec 09 14:26:17 crc kubenswrapper[5173]: I1209 14:26:17.491670 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_4efd88b5-7256-43e0-a227-c6ca79e8ec01/git-clone/0.log"
Dec 09 14:26:17 crc kubenswrapper[5173]: I1209 14:26:17.491710 5173 generic.go:358] "Generic (PLEG): container finished" podID="4efd88b5-7256-43e0-a227-c6ca79e8ec01" containerID="0dc638dcafa3d83dc1e64bf93596cb7c43d9f1ce712972b833aaf7ab5c25ba80" exitCode=1
Dec 09 14:26:17 crc kubenswrapper[5173]: I1209 14:26:17.491798 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"4efd88b5-7256-43e0-a227-c6ca79e8ec01","Type":"ContainerDied","Data":"0dc638dcafa3d83dc1e64bf93596cb7c43d9f1ce712972b833aaf7ab5c25ba80"}
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.428410 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_4efd88b5-7256-43e0-a227-c6ca79e8ec01/git-clone/0.log"
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.428948 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.506281 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-container-storage-run\") pod \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") "
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.506383 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/4efd88b5-7256-43e0-a227-c6ca79e8ec01-builder-dockercfg-qgrnl-pull\") pod \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") "
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.506417 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-system-configs\") pod \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") "
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.507426 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "4efd88b5-7256-43e0-a227-c6ca79e8ec01" (UID: "4efd88b5-7256-43e0-a227-c6ca79e8ec01"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.506645 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/4efd88b5-7256-43e0-a227-c6ca79e8ec01-builder-dockercfg-qgrnl-push\") pod \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") "
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.507556 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-blob-cache\") pod \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") "
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.507627 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-ca-bundles\") pod \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") "
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.507732 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-container-storage-root\") pod \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") "
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.507828 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-buildworkdir\") pod \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") "
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.507906 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnvpc\" (UniqueName: \"kubernetes.io/projected/4efd88b5-7256-43e0-a227-c6ca79e8ec01-kube-api-access-fnvpc\") pod \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") "
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.507962 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4efd88b5-7256-43e0-a227-c6ca79e8ec01-buildcachedir\") pod \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") "
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.507892 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "4efd88b5-7256-43e0-a227-c6ca79e8ec01" (UID: "4efd88b5-7256-43e0-a227-c6ca79e8ec01"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.508000 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "4efd88b5-7256-43e0-a227-c6ca79e8ec01" (UID: "4efd88b5-7256-43e0-a227-c6ca79e8ec01"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.508080 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-proxy-ca-bundles\") pod \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") "
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.508113 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4efd88b5-7256-43e0-a227-c6ca79e8ec01-node-pullsecrets\") pod \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\" (UID: \"4efd88b5-7256-43e0-a227-c6ca79e8ec01\") "
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.508335 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "4efd88b5-7256-43e0-a227-c6ca79e8ec01" (UID: "4efd88b5-7256-43e0-a227-c6ca79e8ec01"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.508541 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4efd88b5-7256-43e0-a227-c6ca79e8ec01-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "4efd88b5-7256-43e0-a227-c6ca79e8ec01" (UID: "4efd88b5-7256-43e0-a227-c6ca79e8ec01"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.508791 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "4efd88b5-7256-43e0-a227-c6ca79e8ec01" (UID: "4efd88b5-7256-43e0-a227-c6ca79e8ec01"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.508815 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4efd88b5-7256-43e0-a227-c6ca79e8ec01-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "4efd88b5-7256-43e0-a227-c6ca79e8ec01" (UID: "4efd88b5-7256-43e0-a227-c6ca79e8ec01"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.508926 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "4efd88b5-7256-43e0-a227-c6ca79e8ec01" (UID: "4efd88b5-7256-43e0-a227-c6ca79e8ec01"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.509189 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "4efd88b5-7256-43e0-a227-c6ca79e8ec01" (UID: "4efd88b5-7256-43e0-a227-c6ca79e8ec01"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.511922 5173 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.511943 5173 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.511955 5173 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.511968 5173 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.513059 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efd88b5-7256-43e0-a227-c6ca79e8ec01-builder-dockercfg-qgrnl-push" (OuterVolumeSpecName: "builder-dockercfg-qgrnl-push") pod "4efd88b5-7256-43e0-a227-c6ca79e8ec01" (UID: "4efd88b5-7256-43e0-a227-c6ca79e8ec01"). InnerVolumeSpecName "builder-dockercfg-qgrnl-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.511980 5173 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.513462 5173 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4efd88b5-7256-43e0-a227-c6ca79e8ec01-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.513477 5173 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4efd88b5-7256-43e0-a227-c6ca79e8ec01-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.513468 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4efd88b5-7256-43e0-a227-c6ca79e8ec01-kube-api-access-fnvpc" (OuterVolumeSpecName: "kube-api-access-fnvpc") pod "4efd88b5-7256-43e0-a227-c6ca79e8ec01" (UID: "4efd88b5-7256-43e0-a227-c6ca79e8ec01"). InnerVolumeSpecName "kube-api-access-fnvpc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.513491 5173 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4efd88b5-7256-43e0-a227-c6ca79e8ec01-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.514841 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efd88b5-7256-43e0-a227-c6ca79e8ec01-builder-dockercfg-qgrnl-pull" (OuterVolumeSpecName: "builder-dockercfg-qgrnl-pull") pod "4efd88b5-7256-43e0-a227-c6ca79e8ec01" (UID: "4efd88b5-7256-43e0-a227-c6ca79e8ec01"). InnerVolumeSpecName "builder-dockercfg-qgrnl-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.515957 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_4efd88b5-7256-43e0-a227-c6ca79e8ec01/git-clone/0.log"
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.516033 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"4efd88b5-7256-43e0-a227-c6ca79e8ec01","Type":"ContainerDied","Data":"7a3ea7d2ee5be28c664bdbef6095c7c7ce31c22117d95586b8bb6949f436dff5"}
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.516076 5173 scope.go:117] "RemoveContainer" containerID="0dc638dcafa3d83dc1e64bf93596cb7c43d9f1ce712972b833aaf7ab5c25ba80"
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.516228 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.573174 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.580576 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.614641 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fnvpc\" (UniqueName: \"kubernetes.io/projected/4efd88b5-7256-43e0-a227-c6ca79e8ec01-kube-api-access-fnvpc\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.614668 5173 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4efd88b5-7256-43e0-a227-c6ca79e8ec01-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.614682 5173 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/4efd88b5-7256-43e0-a227-c6ca79e8ec01-builder-dockercfg-qgrnl-pull\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:20 crc kubenswrapper[5173]: I1209 14:26:20.614693 5173 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/4efd88b5-7256-43e0-a227-c6ca79e8ec01-builder-dockercfg-qgrnl-push\") on node \"crc\" DevicePath \"\""
Dec 09 14:26:21 crc kubenswrapper[5173]: I1209 14:26:21.882032 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4efd88b5-7256-43e0-a227-c6ca79e8ec01" path="/var/lib/kubelet/pods/4efd88b5-7256-43e0-a227-c6ca79e8ec01/volumes"
Dec 09 14:26:22 crc kubenswrapper[5173]: I1209 14:26:22.414304 5173 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="c190e447-0fde-4640-ac4d-f68a5351ab58" containerName="elasticsearch" probeResult="failure" output=<
Dec 09 14:26:22 crc kubenswrapper[5173]: {"timestamp": "2025-12-09T14:26:22+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Dec 09 14:26:22 crc kubenswrapper[5173]: >
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.545521 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-vqw6b" event={"ID":"96a44790-ee4a-43d7-9ab0-828f9939c2b5","Type":"ContainerStarted","Data":"2dca183db1274a59db56dfbc2903f1a69d540fc1857a8a504f59e8bc260a94a1"}
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.545981 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-vqw6b"
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.548338 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2nzbr" event={"ID":"4df7e70b-0991-424f-b261-bd498653f853","Type":"ContainerStarted","Data":"4e5a38cee1a5aaa8a780e05860521c1d0ff5be9dbedbda0dc4332558c9a1e2c9"}
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.548396 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-k8msr"]
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.549382 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4efd88b5-7256-43e0-a227-c6ca79e8ec01" containerName="git-clone"
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.549415 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="4efd88b5-7256-43e0-a227-c6ca79e8ec01" containerName="git-clone"
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.549453 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7a0b0488-af83-49f4-a18a-3b70e04aaaf2" containerName="manage-dockerfile"
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.549464 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0b0488-af83-49f4-a18a-3b70e04aaaf2" containerName="manage-dockerfile"
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.549631 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="7a0b0488-af83-49f4-a18a-3b70e04aaaf2" containerName="manage-dockerfile"
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.549653 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="4efd88b5-7256-43e0-a227-c6ca79e8ec01" containerName="git-clone"
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.572814 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-vqw6b" podStartSLOduration=4.445189036 podStartE2EDuration="15.572797606s" podCreationTimestamp="2025-12-09 14:26:09 +0000 UTC" firstStartedPulling="2025-12-09 14:26:12.76494881 +0000 UTC m=+855.690231047" lastFinishedPulling="2025-12-09 14:26:23.89255737 +0000 UTC m=+866.817839617" observedRunningTime="2025-12-09 14:26:24.569968956 +0000 UTC m=+867.495251213" watchObservedRunningTime="2025-12-09 14:26:24.572797606 +0000 UTC m=+867.498079843"
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.588813 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2nzbr" podStartSLOduration=1.479043495 podStartE2EDuration="11.588797722s" podCreationTimestamp="2025-12-09 14:26:13 +0000 UTC" firstStartedPulling="2025-12-09 14:26:13.779929542 +0000 UTC m=+856.705211789" lastFinishedPulling="2025-12-09 14:26:23.889683769 +0000 UTC m=+866.814966016" observedRunningTime="2025-12-09 14:26:24.585136637 +0000 UTC m=+867.510418894" watchObservedRunningTime="2025-12-09 14:26:24.588797722 +0000 UTC m=+867.514079969"
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.682656 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-k8msr"]
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.682804 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-k8msr"
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.685038 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-9nvrg\""
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.736754 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl87w\" (UniqueName: \"kubernetes.io/projected/2a14be1e-86b6-4332-b238-2319e5c809d8-kube-api-access-pl87w\") pod \"cert-manager-858d87f86b-k8msr\" (UID: \"2a14be1e-86b6-4332-b238-2319e5c809d8\") " pod="cert-manager/cert-manager-858d87f86b-k8msr"
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.736982 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a14be1e-86b6-4332-b238-2319e5c809d8-bound-sa-token\") pod \"cert-manager-858d87f86b-k8msr\" (UID: \"2a14be1e-86b6-4332-b238-2319e5c809d8\") " pod="cert-manager/cert-manager-858d87f86b-k8msr"
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.838127 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pl87w\" (UniqueName: \"kubernetes.io/projected/2a14be1e-86b6-4332-b238-2319e5c809d8-kube-api-access-pl87w\") pod \"cert-manager-858d87f86b-k8msr\" (UID: \"2a14be1e-86b6-4332-b238-2319e5c809d8\") " pod="cert-manager/cert-manager-858d87f86b-k8msr"
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.838189 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a14be1e-86b6-4332-b238-2319e5c809d8-bound-sa-token\") pod \"cert-manager-858d87f86b-k8msr\" (UID: \"2a14be1e-86b6-4332-b238-2319e5c809d8\") " pod="cert-manager/cert-manager-858d87f86b-k8msr"
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.861950 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl87w\" (UniqueName: \"kubernetes.io/projected/2a14be1e-86b6-4332-b238-2319e5c809d8-kube-api-access-pl87w\") pod \"cert-manager-858d87f86b-k8msr\" (UID: \"2a14be1e-86b6-4332-b238-2319e5c809d8\") " pod="cert-manager/cert-manager-858d87f86b-k8msr"
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.865193 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a14be1e-86b6-4332-b238-2319e5c809d8-bound-sa-token\") pod \"cert-manager-858d87f86b-k8msr\" (UID: \"2a14be1e-86b6-4332-b238-2319e5c809d8\") " pod="cert-manager/cert-manager-858d87f86b-k8msr"
Dec 09 14:26:24 crc kubenswrapper[5173]: I1209 14:26:24.997572 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-k8msr"
Dec 09 14:26:25 crc kubenswrapper[5173]: I1209 14:26:25.268687 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-k8msr"]
Dec 09 14:26:25 crc kubenswrapper[5173]: W1209 14:26:25.273434 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a14be1e_86b6_4332_b238_2319e5c809d8.slice/crio-12adc7df8bd675c55d87ee0aae32fd10a2964130aed57390ca8f31db557f16c5 WatchSource:0}: Error finding container 12adc7df8bd675c55d87ee0aae32fd10a2964130aed57390ca8f31db557f16c5: Status 404 returned error can't find the container with id 12adc7df8bd675c55d87ee0aae32fd10a2964130aed57390ca8f31db557f16c5
Dec 09 14:26:25 crc kubenswrapper[5173]: I1209 14:26:25.556539 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-k8msr" event={"ID":"2a14be1e-86b6-4332-b238-2319e5c809d8","Type":"ContainerStarted","Data":"10913192a7e42e47af8ca53feba937b5f1334b1550a7ed0b488b4d90b002f5df"}
Dec 09 14:26:25 crc kubenswrapper[5173]: I1209 14:26:25.556594 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-k8msr" event={"ID":"2a14be1e-86b6-4332-b238-2319e5c809d8","Type":"ContainerStarted","Data":"12adc7df8bd675c55d87ee0aae32fd10a2964130aed57390ca8f31db557f16c5"}
Dec 09 14:26:25 crc kubenswrapper[5173]: I1209 14:26:25.581168 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-k8msr" podStartSLOduration=1.581146059 podStartE2EDuration="1.581146059s" podCreationTimestamp="2025-12-09 14:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 14:26:25.576869463 +0000 UTC m=+868.502151720" watchObservedRunningTime="2025-12-09 14:26:25.581146059 +0000 UTC m=+868.506428306"
Dec 09 14:26:26 crc kubenswrapper[5173]: I1209 14:26:26.989638 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"]
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.082541 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"]
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.082742 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.086093 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-ca\""
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.086201 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-global-ca\""
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.087569 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-sys-config\""
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.088251 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-qgrnl\""
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.166695 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/80dd398f-e92c-4fb5-8cdb-08494b39d656-builder-dockercfg-qgrnl-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.166775 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.166830 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.166936 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.166997 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/80dd398f-e92c-4fb5-8cdb-08494b39d656-builder-dockercfg-qgrnl-push\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.167026 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.167086 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.167180 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.167221 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/80dd398f-e92c-4fb5-8cdb-08494b39d656-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.167385 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4w65\" (UniqueName: \"kubernetes.io/projected/80dd398f-e92c-4fb5-8cdb-08494b39d656-kube-api-access-m4w65\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.167484 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/80dd398f-e92c-4fb5-8cdb-08494b39d656-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.167525 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.268834 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/80dd398f-e92c-4fb5-8cdb-08494b39d656-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.268882 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.268901 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/80dd398f-e92c-4fb5-8cdb-08494b39d656-builder-dockercfg-qgrnl-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.268915 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/80dd398f-e92c-4fb5-8cdb-08494b39d656-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.268940 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.268966 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.268994 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.269030 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/80dd398f-e92c-4fb5-8cdb-08494b39d656-builder-dockercfg-qgrnl-push\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.269050 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.269064 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.269347 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.269436 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.269489 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.269546 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/80dd398f-e92c-4fb5-8cdb-08494b39d656-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.269671 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.269710 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m4w65\" (UniqueName: \"kubernetes.io/projected/80dd398f-e92c-4fb5-8cdb-08494b39d656-kube-api-access-m4w65\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.269835 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/80dd398f-e92c-4fb5-8cdb-08494b39d656-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.269941 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.270013 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.270038 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.270065 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.280392 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/80dd398f-e92c-4fb5-8cdb-08494b39d656-builder-dockercfg-qgrnl-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.280393 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/80dd398f-e92c-4fb5-8cdb-08494b39d656-builder-dockercfg-qgrnl-push\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.299931 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4w65\" (UniqueName: \"kubernetes.io/projected/80dd398f-e92c-4fb5-8cdb-08494b39d656-kube-api-access-m4w65\") pod \"service-telemetry-operator-3-build\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.401207 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:27 crc kubenswrapper[5173]: I1209 14:26:27.786939 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Dec 09 14:26:28 crc kubenswrapper[5173]: I1209 14:26:28.438839 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"]
Dec 09 14:26:28 crc kubenswrapper[5173]: W1209 14:26:28.446468 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80dd398f_e92c_4fb5_8cdb_08494b39d656.slice/crio-419208d510b5ee2055b780e14bb0d5066c13fc985e9c52e17e9864c8cecceb4b WatchSource:0}: Error finding container 419208d510b5ee2055b780e14bb0d5066c13fc985e9c52e17e9864c8cecceb4b: Status 404 returned error can't find the container with id 419208d510b5ee2055b780e14bb0d5066c13fc985e9c52e17e9864c8cecceb4b
Dec 09 14:26:28 crc kubenswrapper[5173]: I1209 14:26:28.576258 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"80dd398f-e92c-4fb5-8cdb-08494b39d656","Type":"ContainerStarted","Data":"419208d510b5ee2055b780e14bb0d5066c13fc985e9c52e17e9864c8cecceb4b"}
Dec 09 14:26:29 crc kubenswrapper[5173]: I1209 14:26:29.584743 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"80dd398f-e92c-4fb5-8cdb-08494b39d656","Type":"ContainerStarted","Data":"1284850d8efa0abdc28c290254098bcda781dfd46983d844154737752f52622e"}
Dec 09 14:26:29 crc kubenswrapper[5173]: I1209 14:26:29.651089 5173 ???:1] "http: TLS handshake error from 192.168.126.11:33260: no serving certificate available for the kubelet"
Dec 09 14:26:30 crc kubenswrapper[5173]: I1209 14:26:30.560623 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-vqw6b"
Dec 09 14:26:30 crc kubenswrapper[5173]: I1209 14:26:30.689906 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"]
Dec 09 14:26:31 crc kubenswrapper[5173]: I1209 14:26:31.598835 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-3-build" podUID="80dd398f-e92c-4fb5-8cdb-08494b39d656" containerName="git-clone" containerID="cri-o://1284850d8efa0abdc28c290254098bcda781dfd46983d844154737752f52622e" gracePeriod=30
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.562689 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_80dd398f-e92c-4fb5-8cdb-08494b39d656/git-clone/0.log"
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.563243 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.605034 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_80dd398f-e92c-4fb5-8cdb-08494b39d656/git-clone/0.log"
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.605092 5173 generic.go:358] "Generic (PLEG): container finished" podID="80dd398f-e92c-4fb5-8cdb-08494b39d656" containerID="1284850d8efa0abdc28c290254098bcda781dfd46983d844154737752f52622e" exitCode=1
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.605195 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build"
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.605206 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"80dd398f-e92c-4fb5-8cdb-08494b39d656","Type":"ContainerDied","Data":"1284850d8efa0abdc28c290254098bcda781dfd46983d844154737752f52622e"}
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.605268 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"80dd398f-e92c-4fb5-8cdb-08494b39d656","Type":"ContainerDied","Data":"419208d510b5ee2055b780e14bb0d5066c13fc985e9c52e17e9864c8cecceb4b"}
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.605289 5173 scope.go:117] "RemoveContainer" containerID="1284850d8efa0abdc28c290254098bcda781dfd46983d844154737752f52622e"
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.631113 5173 scope.go:117] "RemoveContainer" containerID="1284850d8efa0abdc28c290254098bcda781dfd46983d844154737752f52622e"
Dec 09 14:26:32 crc kubenswrapper[5173]: E1209 14:26:32.631627 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1284850d8efa0abdc28c290254098bcda781dfd46983d844154737752f52622e\": container with ID starting with 1284850d8efa0abdc28c290254098bcda781dfd46983d844154737752f52622e not found: ID does not exist" containerID="1284850d8efa0abdc28c290254098bcda781dfd46983d844154737752f52622e"
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.631658 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1284850d8efa0abdc28c290254098bcda781dfd46983d844154737752f52622e"} err="failed to get container status \"1284850d8efa0abdc28c290254098bcda781dfd46983d844154737752f52622e\": rpc error: code = NotFound desc = could not find container \"1284850d8efa0abdc28c290254098bcda781dfd46983d844154737752f52622e\": container with ID starting with 1284850d8efa0abdc28c290254098bcda781dfd46983d844154737752f52622e not found: ID does not exist"
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.709042 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-system-configs\") pod \"80dd398f-e92c-4fb5-8cdb-08494b39d656\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") "
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.709162 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-container-storage-root\") pod \"80dd398f-e92c-4fb5-8cdb-08494b39d656\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") "
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.709241 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/80dd398f-e92c-4fb5-8cdb-08494b39d656-node-pullsecrets\") pod \"80dd398f-e92c-4fb5-8cdb-08494b39d656\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") "
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.709328 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-blob-cache\") pod \"80dd398f-e92c-4fb5-8cdb-08494b39d656\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") "
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.709390 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/80dd398f-e92c-4fb5-8cdb-08494b39d656-builder-dockercfg-qgrnl-pull\") pod \"80dd398f-e92c-4fb5-8cdb-08494b39d656\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") "
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.709492 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/80dd398f-e92c-4fb5-8cdb-08494b39d656-builder-dockercfg-qgrnl-push\") pod \"80dd398f-e92c-4fb5-8cdb-08494b39d656\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") "
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.709528 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4w65\" (UniqueName: \"kubernetes.io/projected/80dd398f-e92c-4fb5-8cdb-08494b39d656-kube-api-access-m4w65\") pod \"80dd398f-e92c-4fb5-8cdb-08494b39d656\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") "
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.709653 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-ca-bundles\") pod \"80dd398f-e92c-4fb5-8cdb-08494b39d656\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") "
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.709699 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/80dd398f-e92c-4fb5-8cdb-08494b39d656-buildcachedir\") pod \"80dd398f-e92c-4fb5-8cdb-08494b39d656\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") "
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.709736 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-container-storage-run\") pod \"80dd398f-e92c-4fb5-8cdb-08494b39d656\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") "
Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.709725 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "80dd398f-e92c-4fb5-8cdb-08494b39d656" (UID: "80dd398f-e92c-4fb5-8cdb-08494b39d656"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.709763 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-buildworkdir\") pod \"80dd398f-e92c-4fb5-8cdb-08494b39d656\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.709804 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-proxy-ca-bundles\") pod \"80dd398f-e92c-4fb5-8cdb-08494b39d656\" (UID: \"80dd398f-e92c-4fb5-8cdb-08494b39d656\") " Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.709858 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "80dd398f-e92c-4fb5-8cdb-08494b39d656" (UID: "80dd398f-e92c-4fb5-8cdb-08494b39d656"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.709918 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80dd398f-e92c-4fb5-8cdb-08494b39d656-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "80dd398f-e92c-4fb5-8cdb-08494b39d656" (UID: "80dd398f-e92c-4fb5-8cdb-08494b39d656"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.710275 5173 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.710304 5173 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.710321 5173 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/80dd398f-e92c-4fb5-8cdb-08494b39d656-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.710392 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80dd398f-e92c-4fb5-8cdb-08494b39d656-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "80dd398f-e92c-4fb5-8cdb-08494b39d656" (UID: "80dd398f-e92c-4fb5-8cdb-08494b39d656"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.710546 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "80dd398f-e92c-4fb5-8cdb-08494b39d656" (UID: "80dd398f-e92c-4fb5-8cdb-08494b39d656"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.710648 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "80dd398f-e92c-4fb5-8cdb-08494b39d656" (UID: "80dd398f-e92c-4fb5-8cdb-08494b39d656"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.710944 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "80dd398f-e92c-4fb5-8cdb-08494b39d656" (UID: "80dd398f-e92c-4fb5-8cdb-08494b39d656"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.710966 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "80dd398f-e92c-4fb5-8cdb-08494b39d656" (UID: "80dd398f-e92c-4fb5-8cdb-08494b39d656"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.710997 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "80dd398f-e92c-4fb5-8cdb-08494b39d656" (UID: "80dd398f-e92c-4fb5-8cdb-08494b39d656"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.716473 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dd398f-e92c-4fb5-8cdb-08494b39d656-builder-dockercfg-qgrnl-pull" (OuterVolumeSpecName: "builder-dockercfg-qgrnl-pull") pod "80dd398f-e92c-4fb5-8cdb-08494b39d656" (UID: "80dd398f-e92c-4fb5-8cdb-08494b39d656"). InnerVolumeSpecName "builder-dockercfg-qgrnl-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.716503 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dd398f-e92c-4fb5-8cdb-08494b39d656-builder-dockercfg-qgrnl-push" (OuterVolumeSpecName: "builder-dockercfg-qgrnl-push") pod "80dd398f-e92c-4fb5-8cdb-08494b39d656" (UID: "80dd398f-e92c-4fb5-8cdb-08494b39d656"). InnerVolumeSpecName "builder-dockercfg-qgrnl-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.716532 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80dd398f-e92c-4fb5-8cdb-08494b39d656-kube-api-access-m4w65" (OuterVolumeSpecName: "kube-api-access-m4w65") pod "80dd398f-e92c-4fb5-8cdb-08494b39d656" (UID: "80dd398f-e92c-4fb5-8cdb-08494b39d656"). InnerVolumeSpecName "kube-api-access-m4w65". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.812096 5173 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.812142 5173 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.812165 5173 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.812184 5173 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.812201 5173 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/80dd398f-e92c-4fb5-8cdb-08494b39d656-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.812217 5173 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/80dd398f-e92c-4fb5-8cdb-08494b39d656-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.812233 5173 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/80dd398f-e92c-4fb5-8cdb-08494b39d656-builder-dockercfg-qgrnl-pull\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.812251 5173 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/80dd398f-e92c-4fb5-8cdb-08494b39d656-builder-dockercfg-qgrnl-push\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.812267 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m4w65\" (UniqueName: \"kubernetes.io/projected/80dd398f-e92c-4fb5-8cdb-08494b39d656-kube-api-access-m4w65\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.948387 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 09 14:26:32 crc kubenswrapper[5173]: I1209 14:26:32.954434 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 09 14:26:33 crc kubenswrapper[5173]: I1209 14:26:33.879141 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80dd398f-e92c-4fb5-8cdb-08494b39d656" path="/var/lib/kubelet/pods/80dd398f-e92c-4fb5-8cdb-08494b39d656/volumes" Dec 09 14:26:42 crc kubenswrapper[5173]: I1209 14:26:42.780435 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 09 14:26:42 crc kubenswrapper[5173]: I1209 14:26:42.781516 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="80dd398f-e92c-4fb5-8cdb-08494b39d656" 
containerName="git-clone" Dec 09 14:26:42 crc kubenswrapper[5173]: I1209 14:26:42.781533 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dd398f-e92c-4fb5-8cdb-08494b39d656" containerName="git-clone" Dec 09 14:26:42 crc kubenswrapper[5173]: I1209 14:26:42.781653 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="80dd398f-e92c-4fb5-8cdb-08494b39d656" containerName="git-clone" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.667390 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.668246 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.670745 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-sys-config\"" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.670786 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-global-ca\"" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.670999 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-ca\"" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.671411 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-qgrnl\"" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.773290 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.773421 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/99277a92-6714-4d89-9132-809343dd13ff-builder-dockercfg-qgrnl-push\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.773470 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/99277a92-6714-4d89-9132-809343dd13ff-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.773557 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99277a92-6714-4d89-9132-809343dd13ff-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.773608 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kg9d\" 
(UniqueName: \"kubernetes.io/projected/99277a92-6714-4d89-9132-809343dd13ff-kube-api-access-2kg9d\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.773687 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.773736 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/99277a92-6714-4d89-9132-809343dd13ff-builder-dockercfg-qgrnl-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.773765 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99277a92-6714-4d89-9132-809343dd13ff-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.773788 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.773820 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.773850 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/99277a92-6714-4d89-9132-809343dd13ff-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.773873 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/99277a92-6714-4d89-9132-809343dd13ff-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.874863 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2kg9d\" (UniqueName: 
\"kubernetes.io/projected/99277a92-6714-4d89-9132-809343dd13ff-kube-api-access-2kg9d\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.874937 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.874975 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/99277a92-6714-4d89-9132-809343dd13ff-builder-dockercfg-qgrnl-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.875003 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99277a92-6714-4d89-9132-809343dd13ff-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.875026 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.875058 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.875086 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/99277a92-6714-4d89-9132-809343dd13ff-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.875110 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/99277a92-6714-4d89-9132-809343dd13ff-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.875145 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " 
pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.875188 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/99277a92-6714-4d89-9132-809343dd13ff-builder-dockercfg-qgrnl-push\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.875229 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/99277a92-6714-4d89-9132-809343dd13ff-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.875257 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99277a92-6714-4d89-9132-809343dd13ff-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.875679 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.876769 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.876890 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99277a92-6714-4d89-9132-809343dd13ff-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.877008 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.877217 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.877500 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/99277a92-6714-4d89-9132-809343dd13ff-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.877819 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/99277a92-6714-4d89-9132-809343dd13ff-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.878034 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/99277a92-6714-4d89-9132-809343dd13ff-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.878130 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/99277a92-6714-4d89-9132-809343dd13ff-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.883665 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/99277a92-6714-4d89-9132-809343dd13ff-builder-dockercfg-qgrnl-push\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.884251 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/99277a92-6714-4d89-9132-809343dd13ff-builder-dockercfg-qgrnl-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.895751 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kg9d\" (UniqueName: \"kubernetes.io/projected/99277a92-6714-4d89-9132-809343dd13ff-kube-api-access-2kg9d\") pod \"service-telemetry-operator-4-build\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:43 crc kubenswrapper[5173]: I1209 14:26:43.987482 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:44 crc kubenswrapper[5173]: W1209 14:26:44.189813 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99277a92_6714_4d89_9132_809343dd13ff.slice/crio-370efa653785583483e558f1bebfd802efdeea1c64014305c5ad3526f63962c7 WatchSource:0}: Error finding container 370efa653785583483e558f1bebfd802efdeea1c64014305c5ad3526f63962c7: Status 404 returned error can't find the container with id 370efa653785583483e558f1bebfd802efdeea1c64014305c5ad3526f63962c7 Dec 09 14:26:44 crc kubenswrapper[5173]: I1209 14:26:44.196153 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 09 14:26:44 crc kubenswrapper[5173]: I1209 14:26:44.676372 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"99277a92-6714-4d89-9132-809343dd13ff","Type":"ContainerStarted","Data":"f27b481c704993e638ae47131425dd87e36e7ff4065cde43bf3f310fa9dd1005"} Dec 09 14:26:44 crc kubenswrapper[5173]: I1209 14:26:44.676693 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"99277a92-6714-4d89-9132-809343dd13ff","Type":"ContainerStarted","Data":"370efa653785583483e558f1bebfd802efdeea1c64014305c5ad3526f63962c7"} Dec 09 14:26:44 crc kubenswrapper[5173]: I1209 14:26:44.718712 5173 ???:1] "http: TLS handshake error from 192.168.126.11:60316: no serving certificate available for the kubelet" Dec 09 14:26:45 crc kubenswrapper[5173]: I1209 14:26:45.748969 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 09 14:26:46 crc kubenswrapper[5173]: I1209 14:26:46.688683 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-4-build" podUID="99277a92-6714-4d89-9132-809343dd13ff" containerName="git-clone" containerID="cri-o://f27b481c704993e638ae47131425dd87e36e7ff4065cde43bf3f310fa9dd1005" gracePeriod=30 Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.068237 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_99277a92-6714-4d89-9132-809343dd13ff/git-clone/0.log" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.068677 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.225144 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/99277a92-6714-4d89-9132-809343dd13ff-build-system-configs\") pod \"99277a92-6714-4d89-9132-809343dd13ff\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.225196 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/99277a92-6714-4d89-9132-809343dd13ff-buildcachedir\") pod \"99277a92-6714-4d89-9132-809343dd13ff\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.225245 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99277a92-6714-4d89-9132-809343dd13ff-build-ca-bundles\") pod \"99277a92-6714-4d89-9132-809343dd13ff\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.225307 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99277a92-6714-4d89-9132-809343dd13ff-build-proxy-ca-bundles\") pod \"99277a92-6714-4d89-9132-809343dd13ff\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.225329 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-container-storage-run\") pod \"99277a92-6714-4d89-9132-809343dd13ff\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.225376 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-buildworkdir\") pod \"99277a92-6714-4d89-9132-809343dd13ff\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.225404 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-container-storage-root\") pod \"99277a92-6714-4d89-9132-809343dd13ff\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.225469 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/99277a92-6714-4d89-9132-809343dd13ff-builder-dockercfg-qgrnl-push\") pod \"99277a92-6714-4d89-9132-809343dd13ff\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.225488 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/99277a92-6714-4d89-9132-809343dd13ff-builder-dockercfg-qgrnl-pull\") pod \"99277a92-6714-4d89-9132-809343dd13ff\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.225522 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" 
(UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-build-blob-cache\") pod \"99277a92-6714-4d89-9132-809343dd13ff\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.225561 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kg9d\" (UniqueName: \"kubernetes.io/projected/99277a92-6714-4d89-9132-809343dd13ff-kube-api-access-2kg9d\") pod \"99277a92-6714-4d89-9132-809343dd13ff\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.225583 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/99277a92-6714-4d89-9132-809343dd13ff-node-pullsecrets\") pod \"99277a92-6714-4d89-9132-809343dd13ff\" (UID: \"99277a92-6714-4d89-9132-809343dd13ff\") " Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.225789 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99277a92-6714-4d89-9132-809343dd13ff-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "99277a92-6714-4d89-9132-809343dd13ff" (UID: "99277a92-6714-4d89-9132-809343dd13ff"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.226024 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "99277a92-6714-4d89-9132-809343dd13ff" (UID: "99277a92-6714-4d89-9132-809343dd13ff"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.226199 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99277a92-6714-4d89-9132-809343dd13ff-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "99277a92-6714-4d89-9132-809343dd13ff" (UID: "99277a92-6714-4d89-9132-809343dd13ff"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.226252 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "99277a92-6714-4d89-9132-809343dd13ff" (UID: "99277a92-6714-4d89-9132-809343dd13ff"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.226361 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "99277a92-6714-4d89-9132-809343dd13ff" (UID: "99277a92-6714-4d89-9132-809343dd13ff"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.226517 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99277a92-6714-4d89-9132-809343dd13ff-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "99277a92-6714-4d89-9132-809343dd13ff" (UID: "99277a92-6714-4d89-9132-809343dd13ff"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.226569 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99277a92-6714-4d89-9132-809343dd13ff-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "99277a92-6714-4d89-9132-809343dd13ff" (UID: "99277a92-6714-4d89-9132-809343dd13ff"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.226822 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "99277a92-6714-4d89-9132-809343dd13ff" (UID: "99277a92-6714-4d89-9132-809343dd13ff"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.226959 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99277a92-6714-4d89-9132-809343dd13ff-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "99277a92-6714-4d89-9132-809343dd13ff" (UID: "99277a92-6714-4d89-9132-809343dd13ff"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.231535 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99277a92-6714-4d89-9132-809343dd13ff-builder-dockercfg-qgrnl-push" (OuterVolumeSpecName: "builder-dockercfg-qgrnl-push") pod "99277a92-6714-4d89-9132-809343dd13ff" (UID: "99277a92-6714-4d89-9132-809343dd13ff"). InnerVolumeSpecName "builder-dockercfg-qgrnl-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.231591 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99277a92-6714-4d89-9132-809343dd13ff-kube-api-access-2kg9d" (OuterVolumeSpecName: "kube-api-access-2kg9d") pod "99277a92-6714-4d89-9132-809343dd13ff" (UID: "99277a92-6714-4d89-9132-809343dd13ff"). InnerVolumeSpecName "kube-api-access-2kg9d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.233491 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99277a92-6714-4d89-9132-809343dd13ff-builder-dockercfg-qgrnl-pull" (OuterVolumeSpecName: "builder-dockercfg-qgrnl-pull") pod "99277a92-6714-4d89-9132-809343dd13ff" (UID: "99277a92-6714-4d89-9132-809343dd13ff"). InnerVolumeSpecName "builder-dockercfg-qgrnl-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.326980 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2kg9d\" (UniqueName: \"kubernetes.io/projected/99277a92-6714-4d89-9132-809343dd13ff-kube-api-access-2kg9d\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.327017 5173 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/99277a92-6714-4d89-9132-809343dd13ff-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.327029 5173 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/99277a92-6714-4d89-9132-809343dd13ff-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.327041 5173 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/99277a92-6714-4d89-9132-809343dd13ff-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.327053 5173 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99277a92-6714-4d89-9132-809343dd13ff-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.327065 5173 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99277a92-6714-4d89-9132-809343dd13ff-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.327079 5173 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.327090 5173 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.327102 5173 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.327114 5173 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/99277a92-6714-4d89-9132-809343dd13ff-builder-dockercfg-qgrnl-push\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.327125 5173 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/99277a92-6714-4d89-9132-809343dd13ff-builder-dockercfg-qgrnl-pull\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.327136 5173 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/99277a92-6714-4d89-9132-809343dd13ff-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.697220 5173 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_99277a92-6714-4d89-9132-809343dd13ff/git-clone/0.log" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.697880 5173 generic.go:358] "Generic (PLEG): container finished" podID="99277a92-6714-4d89-9132-809343dd13ff" containerID="f27b481c704993e638ae47131425dd87e36e7ff4065cde43bf3f310fa9dd1005" exitCode=1 Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.698003 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"99277a92-6714-4d89-9132-809343dd13ff","Type":"ContainerDied","Data":"f27b481c704993e638ae47131425dd87e36e7ff4065cde43bf3f310fa9dd1005"} Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.698061 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.698095 5173 scope.go:117] "RemoveContainer" containerID="f27b481c704993e638ae47131425dd87e36e7ff4065cde43bf3f310fa9dd1005" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.698077 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"99277a92-6714-4d89-9132-809343dd13ff","Type":"ContainerDied","Data":"370efa653785583483e558f1bebfd802efdeea1c64014305c5ad3526f63962c7"} Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.718700 5173 scope.go:117] "RemoveContainer" containerID="f27b481c704993e638ae47131425dd87e36e7ff4065cde43bf3f310fa9dd1005" Dec 09 14:26:47 crc kubenswrapper[5173]: E1209 14:26:47.719238 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f27b481c704993e638ae47131425dd87e36e7ff4065cde43bf3f310fa9dd1005\": container with ID starting with f27b481c704993e638ae47131425dd87e36e7ff4065cde43bf3f310fa9dd1005 not found: ID does not exist" containerID="f27b481c704993e638ae47131425dd87e36e7ff4065cde43bf3f310fa9dd1005" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.719311 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f27b481c704993e638ae47131425dd87e36e7ff4065cde43bf3f310fa9dd1005"} err="failed to get container status \"f27b481c704993e638ae47131425dd87e36e7ff4065cde43bf3f310fa9dd1005\": rpc error: code = NotFound desc = could not find container \"f27b481c704993e638ae47131425dd87e36e7ff4065cde43bf3f310fa9dd1005\": container with ID starting with f27b481c704993e638ae47131425dd87e36e7ff4065cde43bf3f310fa9dd1005 not found: ID does not exist" Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.758646 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.766691 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 09 14:26:47 crc kubenswrapper[5173]: I1209 14:26:47.880616 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99277a92-6714-4d89-9132-809343dd13ff" path="/var/lib/kubelet/pods/99277a92-6714-4d89-9132-809343dd13ff/volumes" Dec 09 14:26:57 crc kubenswrapper[5173]: I1209 14:26:57.200554 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 09 14:26:57 crc kubenswrapper[5173]: I1209 14:26:57.201849 5173 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="99277a92-6714-4d89-9132-809343dd13ff" containerName="git-clone" Dec 09 14:26:57 crc kubenswrapper[5173]: I1209 14:26:57.201866 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="99277a92-6714-4d89-9132-809343dd13ff" containerName="git-clone" Dec 09 14:26:57 crc kubenswrapper[5173]: I1209 14:26:57.202007 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="99277a92-6714-4d89-9132-809343dd13ff" containerName="git-clone" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.514322 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d24z7_a80ae74e-7470-4168-bdc1-454fa2137d7a/kube-multus/0.log" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.514487 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d24z7_a80ae74e-7470-4168-bdc1-454fa2137d7a/kube-multus/0.log" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.523634 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.523937 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.786997 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.787201 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.790770 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-global-ca\"" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.790883 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-qgrnl\"" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.790770 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-sys-config\"" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.794414 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-ca\"" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.983242 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.983528 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.983669 5173 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6g74\" (UniqueName: \"kubernetes.io/projected/0e9655b0-2d97-4e02-96e7-9613367f2d79-kube-api-access-f6g74\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.983796 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/0e9655b0-2d97-4e02-96e7-9613367f2d79-builder-dockercfg-qgrnl-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.983901 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.984023 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.984153 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0e9655b0-2d97-4e02-96e7-9613367f2d79-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.984281 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.984444 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.984501 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/0e9655b0-2d97-4e02-96e7-9613367f2d79-builder-dockercfg-qgrnl-push\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.984557 5173 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:58 crc kubenswrapper[5173]: I1209 14:26:58.984719 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0e9655b0-2d97-4e02-96e7-9613367f2d79-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.085911 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.085989 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.086021 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f6g74\" (UniqueName: \"kubernetes.io/projected/0e9655b0-2d97-4e02-96e7-9613367f2d79-kube-api-access-f6g74\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.086283 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/0e9655b0-2d97-4e02-96e7-9613367f2d79-builder-dockercfg-qgrnl-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.087764 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.087877 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.088005 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/0e9655b0-2d97-4e02-96e7-9613367f2d79-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.088108 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.088205 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.088239 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/0e9655b0-2d97-4e02-96e7-9613367f2d79-builder-dockercfg-qgrnl-push\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.088304 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.088405 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.088444 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0e9655b0-2d97-4e02-96e7-9613367f2d79-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.088800 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.089023 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0e9655b0-2d97-4e02-96e7-9613367f2d79-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " 
pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.089438 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.089681 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.090113 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0e9655b0-2d97-4e02-96e7-9613367f2d79-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.088796 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.090475 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.090500 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.095795 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/0e9655b0-2d97-4e02-96e7-9613367f2d79-builder-dockercfg-qgrnl-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.095960 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/0e9655b0-2d97-4e02-96e7-9613367f2d79-builder-dockercfg-qgrnl-push\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.112192 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6g74\" (UniqueName: 
\"kubernetes.io/projected/0e9655b0-2d97-4e02-96e7-9613367f2d79-kube-api-access-f6g74\") pod \"service-telemetry-operator-5-build\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.407915 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.620376 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 09 14:26:59 crc kubenswrapper[5173]: I1209 14:26:59.793578 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"0e9655b0-2d97-4e02-96e7-9613367f2d79","Type":"ContainerStarted","Data":"0fb6184855d780b3943f467ebbaf2a7000df35deb392e08feaec1bb839f27ee4"} Dec 09 14:27:00 crc kubenswrapper[5173]: I1209 14:27:00.802672 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"0e9655b0-2d97-4e02-96e7-9613367f2d79","Type":"ContainerStarted","Data":"cdb86f87b5e05ed15825d083ed52c34b7d4ebc542d21ef92b5f6b8036d29bf28"} Dec 09 14:27:00 crc kubenswrapper[5173]: I1209 14:27:00.855923 5173 ???:1] "http: TLS handshake error from 192.168.126.11:33768: no serving certificate available for the kubelet" Dec 09 14:27:01 crc kubenswrapper[5173]: I1209 14:27:01.884240 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 09 14:27:02 crc kubenswrapper[5173]: I1209 14:27:02.815107 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-5-build" podUID="0e9655b0-2d97-4e02-96e7-9613367f2d79" containerName="git-clone" containerID="cri-o://cdb86f87b5e05ed15825d083ed52c34b7d4ebc542d21ef92b5f6b8036d29bf28" gracePeriod=30 Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.203972 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_0e9655b0-2d97-4e02-96e7-9613367f2d79/git-clone/0.log" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.204232 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.243715 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-container-storage-run\") pod \"0e9655b0-2d97-4e02-96e7-9613367f2d79\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.243761 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6g74\" (UniqueName: \"kubernetes.io/projected/0e9655b0-2d97-4e02-96e7-9613367f2d79-kube-api-access-f6g74\") pod \"0e9655b0-2d97-4e02-96e7-9613367f2d79\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.243791 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-blob-cache\") pod \"0e9655b0-2d97-4e02-96e7-9613367f2d79\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.243853 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-system-configs\") pod \"0e9655b0-2d97-4e02-96e7-9613367f2d79\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.243885 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-buildworkdir\") pod \"0e9655b0-2d97-4e02-96e7-9613367f2d79\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.243934 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-proxy-ca-bundles\") pod \"0e9655b0-2d97-4e02-96e7-9613367f2d79\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.243956 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0e9655b0-2d97-4e02-96e7-9613367f2d79-node-pullsecrets\") pod \"0e9655b0-2d97-4e02-96e7-9613367f2d79\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.243990 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/0e9655b0-2d97-4e02-96e7-9613367f2d79-builder-dockercfg-qgrnl-pull\") pod \"0e9655b0-2d97-4e02-96e7-9613367f2d79\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.244014 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-ca-bundles\") pod \"0e9655b0-2d97-4e02-96e7-9613367f2d79\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.244036 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: 
\"kubernetes.io/secret/0e9655b0-2d97-4e02-96e7-9613367f2d79-builder-dockercfg-qgrnl-push\") pod \"0e9655b0-2d97-4e02-96e7-9613367f2d79\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.244098 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0e9655b0-2d97-4e02-96e7-9613367f2d79-buildcachedir\") pod \"0e9655b0-2d97-4e02-96e7-9613367f2d79\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.244123 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "0e9655b0-2d97-4e02-96e7-9613367f2d79" (UID: "0e9655b0-2d97-4e02-96e7-9613367f2d79"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.244150 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-container-storage-root\") pod \"0e9655b0-2d97-4e02-96e7-9613367f2d79\" (UID: \"0e9655b0-2d97-4e02-96e7-9613367f2d79\") " Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.244229 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "0e9655b0-2d97-4e02-96e7-9613367f2d79" (UID: "0e9655b0-2d97-4e02-96e7-9613367f2d79"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.244304 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "0e9655b0-2d97-4e02-96e7-9613367f2d79" (UID: "0e9655b0-2d97-4e02-96e7-9613367f2d79"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.244323 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e9655b0-2d97-4e02-96e7-9613367f2d79-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "0e9655b0-2d97-4e02-96e7-9613367f2d79" (UID: "0e9655b0-2d97-4e02-96e7-9613367f2d79"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.244425 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e9655b0-2d97-4e02-96e7-9613367f2d79-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "0e9655b0-2d97-4e02-96e7-9613367f2d79" (UID: "0e9655b0-2d97-4e02-96e7-9613367f2d79"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.244552 5173 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0e9655b0-2d97-4e02-96e7-9613367f2d79-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.244569 5173 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.244579 5173 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.244588 5173 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.244596 5173 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0e9655b0-2d97-4e02-96e7-9613367f2d79-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.244617 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "0e9655b0-2d97-4e02-96e7-9613367f2d79" (UID: "0e9655b0-2d97-4e02-96e7-9613367f2d79"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.246674 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "0e9655b0-2d97-4e02-96e7-9613367f2d79" (UID: "0e9655b0-2d97-4e02-96e7-9613367f2d79"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.246813 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "0e9655b0-2d97-4e02-96e7-9613367f2d79" (UID: "0e9655b0-2d97-4e02-96e7-9613367f2d79"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.246960 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "0e9655b0-2d97-4e02-96e7-9613367f2d79" (UID: "0e9655b0-2d97-4e02-96e7-9613367f2d79"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.249861 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e9655b0-2d97-4e02-96e7-9613367f2d79-kube-api-access-f6g74" (OuterVolumeSpecName: "kube-api-access-f6g74") pod "0e9655b0-2d97-4e02-96e7-9613367f2d79" (UID: "0e9655b0-2d97-4e02-96e7-9613367f2d79"). InnerVolumeSpecName "kube-api-access-f6g74". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.249993 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9655b0-2d97-4e02-96e7-9613367f2d79-builder-dockercfg-qgrnl-push" (OuterVolumeSpecName: "builder-dockercfg-qgrnl-push") pod "0e9655b0-2d97-4e02-96e7-9613367f2d79" (UID: "0e9655b0-2d97-4e02-96e7-9613367f2d79"). InnerVolumeSpecName "builder-dockercfg-qgrnl-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.250069 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9655b0-2d97-4e02-96e7-9613367f2d79-builder-dockercfg-qgrnl-pull" (OuterVolumeSpecName: "builder-dockercfg-qgrnl-pull") pod "0e9655b0-2d97-4e02-96e7-9613367f2d79" (UID: "0e9655b0-2d97-4e02-96e7-9613367f2d79"). InnerVolumeSpecName "builder-dockercfg-qgrnl-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.345478 5173 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0e9655b0-2d97-4e02-96e7-9613367f2d79-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.345514 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f6g74\" (UniqueName: \"kubernetes.io/projected/0e9655b0-2d97-4e02-96e7-9613367f2d79-kube-api-access-f6g74\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.345525 5173 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.345534 5173 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.345542 5173 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-qgrnl-pull\" (UniqueName: \"kubernetes.io/secret/0e9655b0-2d97-4e02-96e7-9613367f2d79-builder-dockercfg-qgrnl-pull\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.345551 5173 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e9655b0-2d97-4e02-96e7-9613367f2d79-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.345558 5173 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-qgrnl-push\" (UniqueName: \"kubernetes.io/secret/0e9655b0-2d97-4e02-96e7-9613367f2d79-builder-dockercfg-qgrnl-push\") on node \"crc\" DevicePath \"\"" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.821128 5173 log.go:25] "Finished 
parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_0e9655b0-2d97-4e02-96e7-9613367f2d79/git-clone/0.log" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.821520 5173 generic.go:358] "Generic (PLEG): container finished" podID="0e9655b0-2d97-4e02-96e7-9613367f2d79" containerID="cdb86f87b5e05ed15825d083ed52c34b7d4ebc542d21ef92b5f6b8036d29bf28" exitCode=1 Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.821605 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.821666 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"0e9655b0-2d97-4e02-96e7-9613367f2d79","Type":"ContainerDied","Data":"cdb86f87b5e05ed15825d083ed52c34b7d4ebc542d21ef92b5f6b8036d29bf28"} Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.821734 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"0e9655b0-2d97-4e02-96e7-9613367f2d79","Type":"ContainerDied","Data":"0fb6184855d780b3943f467ebbaf2a7000df35deb392e08feaec1bb839f27ee4"} Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.821759 5173 scope.go:117] "RemoveContainer" containerID="cdb86f87b5e05ed15825d083ed52c34b7d4ebc542d21ef92b5f6b8036d29bf28" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.847509 5173 scope.go:117] "RemoveContainer" containerID="cdb86f87b5e05ed15825d083ed52c34b7d4ebc542d21ef92b5f6b8036d29bf28" Dec 09 14:27:03 crc kubenswrapper[5173]: E1209 14:27:03.848076 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdb86f87b5e05ed15825d083ed52c34b7d4ebc542d21ef92b5f6b8036d29bf28\": container with ID starting with cdb86f87b5e05ed15825d083ed52c34b7d4ebc542d21ef92b5f6b8036d29bf28 not found: ID does not exist" containerID="cdb86f87b5e05ed15825d083ed52c34b7d4ebc542d21ef92b5f6b8036d29bf28" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.848121 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdb86f87b5e05ed15825d083ed52c34b7d4ebc542d21ef92b5f6b8036d29bf28"} err="failed to get container status \"cdb86f87b5e05ed15825d083ed52c34b7d4ebc542d21ef92b5f6b8036d29bf28\": rpc error: code = NotFound desc = could not find container \"cdb86f87b5e05ed15825d083ed52c34b7d4ebc542d21ef92b5f6b8036d29bf28\": container with ID starting with cdb86f87b5e05ed15825d083ed52c34b7d4ebc542d21ef92b5f6b8036d29bf28 not found: ID does not exist" Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.852469 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.856474 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 09 14:27:03 crc kubenswrapper[5173]: I1209 14:27:03.877099 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e9655b0-2d97-4e02-96e7-9613367f2d79" path="/var/lib/kubelet/pods/0e9655b0-2d97-4e02-96e7-9613367f2d79/volumes" Dec 09 14:27:47 crc kubenswrapper[5173]: I1209 14:27:47.719105 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f9zsf"] Dec 09 14:27:47 crc kubenswrapper[5173]: I1209 14:27:47.720465 5173 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="0e9655b0-2d97-4e02-96e7-9613367f2d79" containerName="git-clone" Dec 09 14:27:47 crc kubenswrapper[5173]: I1209 14:27:47.720484 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9655b0-2d97-4e02-96e7-9613367f2d79" containerName="git-clone" Dec 09 14:27:47 crc kubenswrapper[5173]: I1209 14:27:47.720661 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="0e9655b0-2d97-4e02-96e7-9613367f2d79" containerName="git-clone" Dec 09 14:27:48 crc kubenswrapper[5173]: I1209 14:27:48.283597 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f9zsf"] Dec 09 14:27:48 crc kubenswrapper[5173]: I1209 14:27:48.283772 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9zsf" Dec 09 14:27:48 crc kubenswrapper[5173]: I1209 14:27:48.413783 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27117986-c17b-4402-b5a3-768ff7063fa8-utilities\") pod \"community-operators-f9zsf\" (UID: \"27117986-c17b-4402-b5a3-768ff7063fa8\") " pod="openshift-marketplace/community-operators-f9zsf" Dec 09 14:27:48 crc kubenswrapper[5173]: I1209 14:27:48.414067 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwb99\" (UniqueName: \"kubernetes.io/projected/27117986-c17b-4402-b5a3-768ff7063fa8-kube-api-access-gwb99\") pod \"community-operators-f9zsf\" (UID: \"27117986-c17b-4402-b5a3-768ff7063fa8\") " pod="openshift-marketplace/community-operators-f9zsf" Dec 09 14:27:48 crc kubenswrapper[5173]: I1209 14:27:48.414189 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27117986-c17b-4402-b5a3-768ff7063fa8-catalog-content\") pod \"community-operators-f9zsf\" (UID: \"27117986-c17b-4402-b5a3-768ff7063fa8\") " pod="openshift-marketplace/community-operators-f9zsf" Dec 09 14:27:48 crc kubenswrapper[5173]: I1209 14:27:48.515016 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27117986-c17b-4402-b5a3-768ff7063fa8-utilities\") pod \"community-operators-f9zsf\" (UID: \"27117986-c17b-4402-b5a3-768ff7063fa8\") " pod="openshift-marketplace/community-operators-f9zsf" Dec 09 14:27:48 crc kubenswrapper[5173]: I1209 14:27:48.515292 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwb99\" (UniqueName: \"kubernetes.io/projected/27117986-c17b-4402-b5a3-768ff7063fa8-kube-api-access-gwb99\") pod \"community-operators-f9zsf\" (UID: \"27117986-c17b-4402-b5a3-768ff7063fa8\") " pod="openshift-marketplace/community-operators-f9zsf" Dec 09 14:27:48 crc kubenswrapper[5173]: I1209 14:27:48.515437 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27117986-c17b-4402-b5a3-768ff7063fa8-catalog-content\") pod \"community-operators-f9zsf\" (UID: \"27117986-c17b-4402-b5a3-768ff7063fa8\") " pod="openshift-marketplace/community-operators-f9zsf" Dec 09 14:27:48 crc kubenswrapper[5173]: I1209 14:27:48.515533 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27117986-c17b-4402-b5a3-768ff7063fa8-utilities\") 
pod \"community-operators-f9zsf\" (UID: \"27117986-c17b-4402-b5a3-768ff7063fa8\") " pod="openshift-marketplace/community-operators-f9zsf" Dec 09 14:27:48 crc kubenswrapper[5173]: I1209 14:27:48.515822 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27117986-c17b-4402-b5a3-768ff7063fa8-catalog-content\") pod \"community-operators-f9zsf\" (UID: \"27117986-c17b-4402-b5a3-768ff7063fa8\") " pod="openshift-marketplace/community-operators-f9zsf" Dec 09 14:27:48 crc kubenswrapper[5173]: I1209 14:27:48.535033 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwb99\" (UniqueName: \"kubernetes.io/projected/27117986-c17b-4402-b5a3-768ff7063fa8-kube-api-access-gwb99\") pod \"community-operators-f9zsf\" (UID: \"27117986-c17b-4402-b5a3-768ff7063fa8\") " pod="openshift-marketplace/community-operators-f9zsf" Dec 09 14:27:48 crc kubenswrapper[5173]: I1209 14:27:48.601564 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9zsf" Dec 09 14:27:48 crc kubenswrapper[5173]: I1209 14:27:48.997904 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f9zsf"] Dec 09 14:27:49 crc kubenswrapper[5173]: I1209 14:27:49.093067 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9zsf" event={"ID":"27117986-c17b-4402-b5a3-768ff7063fa8","Type":"ContainerStarted","Data":"14c1af90125e44b7060e1cfeda7effd031d5b20015fea313494250d7f0b1468b"} Dec 09 14:27:50 crc kubenswrapper[5173]: I1209 14:27:50.077749 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nf4mr/must-gather-lqnw5"] Dec 09 14:27:50 crc kubenswrapper[5173]: I1209 14:27:50.105099 5173 generic.go:358] "Generic (PLEG): container finished" podID="27117986-c17b-4402-b5a3-768ff7063fa8" containerID="336812c1c76b91def466d02f95d910ac8606447da8bdb62ccb479823b2b926e1" exitCode=0 Dec 09 14:27:50 crc kubenswrapper[5173]: I1209 14:27:50.245894 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nf4mr/must-gather-lqnw5"] Dec 09 14:27:50 crc kubenswrapper[5173]: I1209 14:27:50.246288 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9zsf" event={"ID":"27117986-c17b-4402-b5a3-768ff7063fa8","Type":"ContainerDied","Data":"336812c1c76b91def466d02f95d910ac8606447da8bdb62ccb479823b2b926e1"} Dec 09 14:27:50 crc kubenswrapper[5173]: I1209 14:27:50.245963 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nf4mr/must-gather-lqnw5" Dec 09 14:27:50 crc kubenswrapper[5173]: I1209 14:27:50.246741 5173 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 14:27:50 crc kubenswrapper[5173]: I1209 14:27:50.247959 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-nf4mr\"/\"default-dockercfg-tnbrw\"" Dec 09 14:27:50 crc kubenswrapper[5173]: I1209 14:27:50.248531 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-nf4mr\"/\"kube-root-ca.crt\"" Dec 09 14:27:50 crc kubenswrapper[5173]: I1209 14:27:50.250337 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-nf4mr\"/\"openshift-service-ca.crt\"" Dec 09 14:27:50 crc kubenswrapper[5173]: I1209 14:27:50.337415 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d9535925-e4f7-40f5-b541-0deb128ae13a-must-gather-output\") pod \"must-gather-lqnw5\" (UID: \"d9535925-e4f7-40f5-b541-0deb128ae13a\") " pod="openshift-must-gather-nf4mr/must-gather-lqnw5" Dec 09 14:27:50 crc kubenswrapper[5173]: I1209 14:27:50.337737 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kppw8\" (UniqueName: \"kubernetes.io/projected/d9535925-e4f7-40f5-b541-0deb128ae13a-kube-api-access-kppw8\") pod \"must-gather-lqnw5\" (UID: \"d9535925-e4f7-40f5-b541-0deb128ae13a\") " pod="openshift-must-gather-nf4mr/must-gather-lqnw5" Dec 09 14:27:50 crc kubenswrapper[5173]: I1209 14:27:50.439477 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d9535925-e4f7-40f5-b541-0deb128ae13a-must-gather-output\") pod \"must-gather-lqnw5\" (UID: \"d9535925-e4f7-40f5-b541-0deb128ae13a\") " pod="openshift-must-gather-nf4mr/must-gather-lqnw5" Dec 09 14:27:50 crc kubenswrapper[5173]: I1209 14:27:50.439568 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kppw8\" (UniqueName: \"kubernetes.io/projected/d9535925-e4f7-40f5-b541-0deb128ae13a-kube-api-access-kppw8\") pod \"must-gather-lqnw5\" (UID: \"d9535925-e4f7-40f5-b541-0deb128ae13a\") " pod="openshift-must-gather-nf4mr/must-gather-lqnw5" Dec 09 14:27:50 crc kubenswrapper[5173]: I1209 14:27:50.439976 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d9535925-e4f7-40f5-b541-0deb128ae13a-must-gather-output\") pod \"must-gather-lqnw5\" (UID: \"d9535925-e4f7-40f5-b541-0deb128ae13a\") " pod="openshift-must-gather-nf4mr/must-gather-lqnw5" Dec 09 14:27:50 crc kubenswrapper[5173]: I1209 14:27:50.461037 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kppw8\" (UniqueName: \"kubernetes.io/projected/d9535925-e4f7-40f5-b541-0deb128ae13a-kube-api-access-kppw8\") pod \"must-gather-lqnw5\" (UID: \"d9535925-e4f7-40f5-b541-0deb128ae13a\") " pod="openshift-must-gather-nf4mr/must-gather-lqnw5" Dec 09 14:27:50 crc kubenswrapper[5173]: I1209 14:27:50.567104 5173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nf4mr/must-gather-lqnw5" Dec 09 14:27:50 crc kubenswrapper[5173]: I1209 14:27:50.739496 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nf4mr/must-gather-lqnw5"] Dec 09 14:27:50 crc kubenswrapper[5173]: W1209 14:27:50.745601 5173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9535925_e4f7_40f5_b541_0deb128ae13a.slice/crio-28fd1b5c47bb2f1a18bcb09c00c002365d87915c5be16b53b3e14ef952dfc872 WatchSource:0}: Error finding container 28fd1b5c47bb2f1a18bcb09c00c002365d87915c5be16b53b3e14ef952dfc872: Status 404 returned error can't find the container with id 28fd1b5c47bb2f1a18bcb09c00c002365d87915c5be16b53b3e14ef952dfc872 Dec 09 14:27:51 crc kubenswrapper[5173]: I1209 14:27:51.111428 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nf4mr/must-gather-lqnw5" event={"ID":"d9535925-e4f7-40f5-b541-0deb128ae13a","Type":"ContainerStarted","Data":"28fd1b5c47bb2f1a18bcb09c00c002365d87915c5be16b53b3e14ef952dfc872"} Dec 09 14:27:53 crc kubenswrapper[5173]: I1209 14:27:53.143257 5173 generic.go:358] "Generic (PLEG): container finished" podID="27117986-c17b-4402-b5a3-768ff7063fa8" containerID="6fa6718662229dbb67f566609d70ef5a894dc9bf49eb2888e3d28e9c4991f656" exitCode=0 Dec 09 14:27:53 crc kubenswrapper[5173]: I1209 14:27:53.143403 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9zsf" event={"ID":"27117986-c17b-4402-b5a3-768ff7063fa8","Type":"ContainerDied","Data":"6fa6718662229dbb67f566609d70ef5a894dc9bf49eb2888e3d28e9c4991f656"} Dec 09 14:27:54 crc kubenswrapper[5173]: I1209 14:27:54.150192 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9zsf" event={"ID":"27117986-c17b-4402-b5a3-768ff7063fa8","Type":"ContainerStarted","Data":"baef702fe43edb11ad0aa70d549fe5758cc5260b935168ae13c5471752deb410"} Dec 09 14:27:54 crc kubenswrapper[5173]: I1209 14:27:54.169741 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f9zsf" podStartSLOduration=5.384492001 podStartE2EDuration="7.169722408s" podCreationTimestamp="2025-12-09 14:27:47 +0000 UTC" firstStartedPulling="2025-12-09 14:27:50.24692127 +0000 UTC m=+953.172203517" lastFinishedPulling="2025-12-09 14:27:52.032151657 +0000 UTC m=+954.957433924" observedRunningTime="2025-12-09 14:27:54.168017895 +0000 UTC m=+957.093300162" watchObservedRunningTime="2025-12-09 14:27:54.169722408 +0000 UTC m=+957.095004655" Dec 09 14:27:58 crc kubenswrapper[5173]: I1209 14:27:58.679215 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-f9zsf" Dec 09 14:27:58 crc kubenswrapper[5173]: I1209 14:27:58.682977 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f9zsf" Dec 09 14:27:58 crc kubenswrapper[5173]: I1209 14:27:58.749658 5173 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f9zsf" Dec 09 14:27:59 crc kubenswrapper[5173]: I1209 14:27:59.757997 5173 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f9zsf" Dec 09 14:27:59 crc kubenswrapper[5173]: I1209 14:27:59.798485 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-f9zsf"] Dec 09 14:28:00 crc kubenswrapper[5173]: I1209 14:28:00.722002 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nf4mr/must-gather-lqnw5" event={"ID":"d9535925-e4f7-40f5-b541-0deb128ae13a","Type":"ContainerStarted","Data":"e87376d8962057ccc6ccdbaf277ccf35d0fcea1101710bbc2e8c3d1bcd475070"} Dec 09 14:28:00 crc kubenswrapper[5173]: I1209 14:28:00.722372 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nf4mr/must-gather-lqnw5" event={"ID":"d9535925-e4f7-40f5-b541-0deb128ae13a","Type":"ContainerStarted","Data":"709bb589d3eeb8ffe04ced62c3d930a2886209de03fbcb68314bd641706e3ca7"} Dec 09 14:28:00 crc kubenswrapper[5173]: I1209 14:28:00.737246 5173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nf4mr/must-gather-lqnw5" podStartSLOduration=1.903734211 podStartE2EDuration="10.737229175s" podCreationTimestamp="2025-12-09 14:27:50 +0000 UTC" firstStartedPulling="2025-12-09 14:27:50.747660942 +0000 UTC m=+953.672943189" lastFinishedPulling="2025-12-09 14:27:59.581155906 +0000 UTC m=+962.506438153" observedRunningTime="2025-12-09 14:28:00.735599334 +0000 UTC m=+963.660881611" watchObservedRunningTime="2025-12-09 14:28:00.737229175 +0000 UTC m=+963.662511422" Dec 09 14:28:01 crc kubenswrapper[5173]: I1209 14:28:01.729816 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f9zsf" podUID="27117986-c17b-4402-b5a3-768ff7063fa8" containerName="registry-server" containerID="cri-o://baef702fe43edb11ad0aa70d549fe5758cc5260b935168ae13c5471752deb410" gracePeriod=2 Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.097929 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9zsf" Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.131655 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27117986-c17b-4402-b5a3-768ff7063fa8-utilities\") pod \"27117986-c17b-4402-b5a3-768ff7063fa8\" (UID: \"27117986-c17b-4402-b5a3-768ff7063fa8\") " Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.131741 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwb99\" (UniqueName: \"kubernetes.io/projected/27117986-c17b-4402-b5a3-768ff7063fa8-kube-api-access-gwb99\") pod \"27117986-c17b-4402-b5a3-768ff7063fa8\" (UID: \"27117986-c17b-4402-b5a3-768ff7063fa8\") " Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.131927 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27117986-c17b-4402-b5a3-768ff7063fa8-catalog-content\") pod \"27117986-c17b-4402-b5a3-768ff7063fa8\" (UID: \"27117986-c17b-4402-b5a3-768ff7063fa8\") " Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.132996 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27117986-c17b-4402-b5a3-768ff7063fa8-utilities" (OuterVolumeSpecName: "utilities") pod "27117986-c17b-4402-b5a3-768ff7063fa8" (UID: "27117986-c17b-4402-b5a3-768ff7063fa8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.137872 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27117986-c17b-4402-b5a3-768ff7063fa8-kube-api-access-gwb99" (OuterVolumeSpecName: "kube-api-access-gwb99") pod "27117986-c17b-4402-b5a3-768ff7063fa8" (UID: "27117986-c17b-4402-b5a3-768ff7063fa8"). InnerVolumeSpecName "kube-api-access-gwb99". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.189258 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27117986-c17b-4402-b5a3-768ff7063fa8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27117986-c17b-4402-b5a3-768ff7063fa8" (UID: "27117986-c17b-4402-b5a3-768ff7063fa8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.234329 5173 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27117986-c17b-4402-b5a3-768ff7063fa8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.234432 5173 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27117986-c17b-4402-b5a3-768ff7063fa8-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.234442 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gwb99\" (UniqueName: \"kubernetes.io/projected/27117986-c17b-4402-b5a3-768ff7063fa8-kube-api-access-gwb99\") on node \"crc\" DevicePath \"\"" Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.740004 5173 generic.go:358] "Generic (PLEG): container finished" podID="27117986-c17b-4402-b5a3-768ff7063fa8" containerID="baef702fe43edb11ad0aa70d549fe5758cc5260b935168ae13c5471752deb410" exitCode=0 Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.740217 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9zsf" event={"ID":"27117986-c17b-4402-b5a3-768ff7063fa8","Type":"ContainerDied","Data":"baef702fe43edb11ad0aa70d549fe5758cc5260b935168ae13c5471752deb410"} Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.740297 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9zsf" event={"ID":"27117986-c17b-4402-b5a3-768ff7063fa8","Type":"ContainerDied","Data":"14c1af90125e44b7060e1cfeda7effd031d5b20015fea313494250d7f0b1468b"} Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.740325 5173 scope.go:117] "RemoveContainer" containerID="baef702fe43edb11ad0aa70d549fe5758cc5260b935168ae13c5471752deb410" Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.792956 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f9zsf" Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.812246 5173 scope.go:117] "RemoveContainer" containerID="6fa6718662229dbb67f566609d70ef5a894dc9bf49eb2888e3d28e9c4991f656" Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.823960 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f9zsf"] Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.828444 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f9zsf"] Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.846168 5173 scope.go:117] "RemoveContainer" containerID="336812c1c76b91def466d02f95d910ac8606447da8bdb62ccb479823b2b926e1" Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.859815 5173 scope.go:117] "RemoveContainer" containerID="baef702fe43edb11ad0aa70d549fe5758cc5260b935168ae13c5471752deb410" Dec 09 14:28:02 crc kubenswrapper[5173]: E1209 14:28:02.860155 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baef702fe43edb11ad0aa70d549fe5758cc5260b935168ae13c5471752deb410\": container with ID starting with baef702fe43edb11ad0aa70d549fe5758cc5260b935168ae13c5471752deb410 not found: ID does not exist" containerID="baef702fe43edb11ad0aa70d549fe5758cc5260b935168ae13c5471752deb410" Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.860202 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baef702fe43edb11ad0aa70d549fe5758cc5260b935168ae13c5471752deb410"} err="failed to get container status \"baef702fe43edb11ad0aa70d549fe5758cc5260b935168ae13c5471752deb410\": rpc error: code = NotFound desc = could not find container \"baef702fe43edb11ad0aa70d549fe5758cc5260b935168ae13c5471752deb410\": container with ID starting with baef702fe43edb11ad0aa70d549fe5758cc5260b935168ae13c5471752deb410 not found: ID does not exist" Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.860225 5173 scope.go:117] "RemoveContainer" containerID="6fa6718662229dbb67f566609d70ef5a894dc9bf49eb2888e3d28e9c4991f656" Dec 09 14:28:02 crc kubenswrapper[5173]: E1209 14:28:02.860567 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fa6718662229dbb67f566609d70ef5a894dc9bf49eb2888e3d28e9c4991f656\": container with ID starting with 6fa6718662229dbb67f566609d70ef5a894dc9bf49eb2888e3d28e9c4991f656 not found: ID does not exist" containerID="6fa6718662229dbb67f566609d70ef5a894dc9bf49eb2888e3d28e9c4991f656" Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.860615 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fa6718662229dbb67f566609d70ef5a894dc9bf49eb2888e3d28e9c4991f656"} err="failed to get container status \"6fa6718662229dbb67f566609d70ef5a894dc9bf49eb2888e3d28e9c4991f656\": rpc error: code = NotFound desc = could not find container \"6fa6718662229dbb67f566609d70ef5a894dc9bf49eb2888e3d28e9c4991f656\": container with ID starting with 6fa6718662229dbb67f566609d70ef5a894dc9bf49eb2888e3d28e9c4991f656 not found: ID does not exist" Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.860650 5173 scope.go:117] "RemoveContainer" containerID="336812c1c76b91def466d02f95d910ac8606447da8bdb62ccb479823b2b926e1" Dec 09 14:28:02 crc kubenswrapper[5173]: E1209 14:28:02.860908 5173 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"336812c1c76b91def466d02f95d910ac8606447da8bdb62ccb479823b2b926e1\": container with ID starting with 336812c1c76b91def466d02f95d910ac8606447da8bdb62ccb479823b2b926e1 not found: ID does not exist" containerID="336812c1c76b91def466d02f95d910ac8606447da8bdb62ccb479823b2b926e1" Dec 09 14:28:02 crc kubenswrapper[5173]: I1209 14:28:02.860927 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"336812c1c76b91def466d02f95d910ac8606447da8bdb62ccb479823b2b926e1"} err="failed to get container status \"336812c1c76b91def466d02f95d910ac8606447da8bdb62ccb479823b2b926e1\": rpc error: code = NotFound desc = could not find container \"336812c1c76b91def466d02f95d910ac8606447da8bdb62ccb479823b2b926e1\": container with ID starting with 336812c1c76b91def466d02f95d910ac8606447da8bdb62ccb479823b2b926e1 not found: ID does not exist" Dec 09 14:28:03 crc kubenswrapper[5173]: I1209 14:28:03.880721 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27117986-c17b-4402-b5a3-768ff7063fa8" path="/var/lib/kubelet/pods/27117986-c17b-4402-b5a3-768ff7063fa8/volumes" Dec 09 14:28:10 crc kubenswrapper[5173]: I1209 14:28:10.074645 5173 ???:1] "http: TLS handshake error from 192.168.126.11:37994: no serving certificate available for the kubelet" Dec 09 14:28:19 crc kubenswrapper[5173]: I1209 14:28:19.085669 5173 patch_prober.go:28] interesting pod/machine-config-daemon-pxfmg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 14:28:19 crc kubenswrapper[5173]: I1209 14:28:19.087247 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 14:28:36 crc kubenswrapper[5173]: I1209 14:28:36.972321 5173 ???:1] "http: TLS handshake error from 192.168.126.11:44092: no serving certificate available for the kubelet" Dec 09 14:28:37 crc kubenswrapper[5173]: I1209 14:28:37.097503 5173 ???:1] "http: TLS handshake error from 192.168.126.11:44108: no serving certificate available for the kubelet" Dec 09 14:28:37 crc kubenswrapper[5173]: I1209 14:28:37.142659 5173 ???:1] "http: TLS handshake error from 192.168.126.11:44122: no serving certificate available for the kubelet" Dec 09 14:28:39 crc kubenswrapper[5173]: E1209 14:28:39.727436 5173 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError" Dec 09 14:28:41 crc kubenswrapper[5173]: I1209 14:28:41.810465 5173 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 09 14:28:41 crc kubenswrapper[5173]: I1209 14:28:41.822305 5173 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 09 14:28:41 crc kubenswrapper[5173]: I1209 14:28:41.840917 5173 ???:1] "http: TLS handshake error from 192.168.126.11:40820: no serving certificate available for the kubelet" Dec 09 14:28:41 crc kubenswrapper[5173]: I1209 14:28:41.870970 5173 ???:1] "http: TLS 
Dec 09 14:28:41 crc kubenswrapper[5173]: I1209 14:28:41.840917 5173 ???:1] "http: TLS handshake error from 192.168.126.11:40820: no serving certificate available for the kubelet"
Dec 09 14:28:41 crc kubenswrapper[5173]: I1209 14:28:41.870970 5173 ???:1] "http: TLS handshake error from 192.168.126.11:40826: no serving certificate available for the kubelet"
Dec 09 14:28:41 crc kubenswrapper[5173]: I1209 14:28:41.910159 5173 ???:1] "http: TLS handshake error from 192.168.126.11:40834: no serving certificate available for the kubelet"
Dec 09 14:28:41 crc kubenswrapper[5173]: I1209 14:28:41.956736 5173 ???:1] "http: TLS handshake error from 192.168.126.11:40838: no serving certificate available for the kubelet"
Dec 09 14:28:42 crc kubenswrapper[5173]: I1209 14:28:42.023066 5173 ???:1] "http: TLS handshake error from 192.168.126.11:40850: no serving certificate available for the kubelet"
Dec 09 14:28:42 crc kubenswrapper[5173]: I1209 14:28:42.125032 5173 ???:1] "http: TLS handshake error from 192.168.126.11:40852: no serving certificate available for the kubelet"
Dec 09 14:28:42 crc kubenswrapper[5173]: I1209 14:28:42.316170 5173 ???:1] "http: TLS handshake error from 192.168.126.11:40858: no serving certificate available for the kubelet"
Dec 09 14:28:42 crc kubenswrapper[5173]: I1209 14:28:42.663791 5173 ???:1] "http: TLS handshake error from 192.168.126.11:40864: no serving certificate available for the kubelet"
Dec 09 14:28:43 crc kubenswrapper[5173]: I1209 14:28:43.328947 5173 ???:1] "http: TLS handshake error from 192.168.126.11:40868: no serving certificate available for the kubelet"
Dec 09 14:28:44 crc kubenswrapper[5173]: I1209 14:28:44.631203 5173 ???:1] "http: TLS handshake error from 192.168.126.11:40882: no serving certificate available for the kubelet"
Dec 09 14:28:47 crc kubenswrapper[5173]: I1209 14:28:47.216703 5173 ???:1] "http: TLS handshake error from 192.168.126.11:40896: no serving certificate available for the kubelet"
Dec 09 14:28:47 crc kubenswrapper[5173]: I1209 14:28:47.565366 5173 ???:1] "http: TLS handshake error from 192.168.126.11:40904: no serving certificate available for the kubelet"
Dec 09 14:28:47 crc kubenswrapper[5173]: I1209 14:28:47.707084 5173 ???:1] "http: TLS handshake error from 192.168.126.11:40920: no serving certificate available for the kubelet"
Dec 09 14:28:47 crc kubenswrapper[5173]: I1209 14:28:47.724314 5173 ???:1] "http: TLS handshake error from 192.168.126.11:40922: no serving certificate available for the kubelet"
Dec 09 14:28:49 crc kubenswrapper[5173]: I1209 14:28:49.084989 5173 patch_prober.go:28] interesting pod/machine-config-daemon-pxfmg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 14:28:49 crc kubenswrapper[5173]: I1209 14:28:49.085129 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 14:28:52 crc kubenswrapper[5173]: I1209 14:28:52.365836 5173 ???:1] "http: TLS handshake error from 192.168.126.11:44852: no serving certificate available for the kubelet"
Dec 09 14:29:02 crc kubenswrapper[5173]: I1209 14:29:02.546734 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49446: no serving certificate available for the kubelet"
Dec 09 14:29:02 crc kubenswrapper[5173]: I1209 14:29:02.634054 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49460: no serving certificate available for the kubelet"
Dec 09 14:29:02 crc kubenswrapper[5173]: I1209 14:29:02.686924 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49470: no serving certificate available for the kubelet"
Dec 09 14:29:02 crc kubenswrapper[5173]: I1209 14:29:02.722669 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49474: no serving certificate available for the kubelet"
Dec 09 14:29:02 crc kubenswrapper[5173]: I1209 14:29:02.731521 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49482: no serving certificate available for the kubelet"
Dec 09 14:29:02 crc kubenswrapper[5173]: I1209 14:29:02.852033 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49490: no serving certificate available for the kubelet"
Dec 09 14:29:02 crc kubenswrapper[5173]: I1209 14:29:02.858566 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49492: no serving certificate available for the kubelet"
Dec 09 14:29:02 crc kubenswrapper[5173]: I1209 14:29:02.891194 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49496: no serving certificate available for the kubelet"
Dec 09 14:29:03 crc kubenswrapper[5173]: I1209 14:29:03.032837 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49512: no serving certificate available for the kubelet"
Dec 09 14:29:03 crc kubenswrapper[5173]: I1209 14:29:03.143979 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49514: no serving certificate available for the kubelet"
Dec 09 14:29:03 crc kubenswrapper[5173]: I1209 14:29:03.194189 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49524: no serving certificate available for the kubelet"
Dec 09 14:29:03 crc kubenswrapper[5173]: I1209 14:29:03.199893 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49526: no serving certificate available for the kubelet"
Dec 09 14:29:03 crc kubenswrapper[5173]: I1209 14:29:03.326045 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49534: no serving certificate available for the kubelet"
Dec 09 14:29:03 crc kubenswrapper[5173]: I1209 14:29:03.349299 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49538: no serving certificate available for the kubelet"
Dec 09 14:29:03 crc kubenswrapper[5173]: I1209 14:29:03.353924 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49546: no serving certificate available for the kubelet"
Dec 09 14:29:03 crc kubenswrapper[5173]: I1209 14:29:03.499320 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49550: no serving certificate available for the kubelet"
Dec 09 14:29:03 crc kubenswrapper[5173]: I1209 14:29:03.642283 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49558: no serving certificate available for the kubelet"
Dec 09 14:29:03 crc kubenswrapper[5173]: I1209 14:29:03.643292 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49570: no serving certificate available for the kubelet"
Dec 09 14:29:03 crc kubenswrapper[5173]: I1209 14:29:03.654522 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49574: no serving certificate available for the kubelet"
Dec 09 14:29:03 crc kubenswrapper[5173]: I1209 14:29:03.827708 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49580: no serving certificate available for the kubelet"
Dec 09 14:29:03 crc kubenswrapper[5173]: I1209 14:29:03.838076 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49586: no serving certificate available for the kubelet"
Dec 09 14:29:03 crc kubenswrapper[5173]: I1209 14:29:03.842565 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49588: no serving certificate available for the kubelet"
Dec 09 14:29:04 crc kubenswrapper[5173]: I1209 14:29:04.007489 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49598: no serving certificate available for the kubelet"
Dec 09 14:29:04 crc kubenswrapper[5173]: I1209 14:29:04.140559 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49600: no serving certificate available for the kubelet"
Dec 09 14:29:04 crc kubenswrapper[5173]: I1209 14:29:04.165456 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49604: no serving certificate available for the kubelet"
Dec 09 14:29:04 crc kubenswrapper[5173]: I1209 14:29:04.171297 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49620: no serving certificate available for the kubelet"
Dec 09 14:29:04 crc kubenswrapper[5173]: I1209 14:29:04.353762 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49622: no serving certificate available for the kubelet"
Dec 09 14:29:04 crc kubenswrapper[5173]: I1209 14:29:04.357484 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49624: no serving certificate available for the kubelet"
Dec 09 14:29:04 crc kubenswrapper[5173]: I1209 14:29:04.374901 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49632: no serving certificate available for the kubelet"
Dec 09 14:29:04 crc kubenswrapper[5173]: I1209 14:29:04.512829 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49648: no serving certificate available for the kubelet"
Dec 09 14:29:04 crc kubenswrapper[5173]: I1209 14:29:04.663333 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49658: no serving certificate available for the kubelet"
Dec 09 14:29:04 crc kubenswrapper[5173]: I1209 14:29:04.664782 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49670: no serving certificate available for the kubelet"
Dec 09 14:29:04 crc kubenswrapper[5173]: I1209 14:29:04.696382 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49674: no serving certificate available for the kubelet"
Dec 09 14:29:04 crc kubenswrapper[5173]: I1209 14:29:04.824840 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49690: no serving certificate available for the kubelet"
Dec 09 14:29:04 crc kubenswrapper[5173]: I1209 14:29:04.827037 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49692: no serving certificate available for the kubelet"
Dec 09 14:29:04 crc kubenswrapper[5173]: I1209 14:29:04.847203 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49706: no serving certificate available for the kubelet"
Dec 09 14:29:04 crc kubenswrapper[5173]: I1209 14:29:04.989803 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49718: no serving certificate available for the kubelet"
Dec 09 14:29:05 crc kubenswrapper[5173]: I1209 14:29:05.111396 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49724: no serving certificate available for the kubelet"
Dec 09 14:29:05 crc kubenswrapper[5173]: I1209 14:29:05.146429 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49734: no serving certificate available for the kubelet"
Dec 09 14:29:05 crc kubenswrapper[5173]: I1209 14:29:05.147092 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49738: no serving certificate available for the kubelet"
Dec 09 14:29:05 crc kubenswrapper[5173]: I1209 14:29:05.310198 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49746: no serving certificate available for the kubelet"
Dec 09 14:29:05 crc kubenswrapper[5173]: I1209 14:29:05.328783 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49752: no serving certificate available for the kubelet"
Dec 09 14:29:05 crc kubenswrapper[5173]: I1209 14:29:05.329242 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49756: no serving certificate available for the kubelet"
Dec 09 14:29:05 crc kubenswrapper[5173]: I1209 14:29:05.352048 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49772: no serving certificate available for the kubelet"
Dec 09 14:29:05 crc kubenswrapper[5173]: I1209 14:29:05.461794 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49782: no serving certificate available for the kubelet"
Dec 09 14:29:05 crc kubenswrapper[5173]: I1209 14:29:05.617230 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49784: no serving certificate available for the kubelet"
Dec 09 14:29:05 crc kubenswrapper[5173]: I1209 14:29:05.619436 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49788: no serving certificate available for the kubelet"
Dec 09 14:29:05 crc kubenswrapper[5173]: I1209 14:29:05.654533 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49800: no serving certificate available for the kubelet"
Dec 09 14:29:05 crc kubenswrapper[5173]: I1209 14:29:05.789649 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49812: no serving certificate available for the kubelet"
Dec 09 14:29:05 crc kubenswrapper[5173]: I1209 14:29:05.791934 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49814: no serving certificate available for the kubelet"
Dec 09 14:29:05 crc kubenswrapper[5173]: I1209 14:29:05.793223 5173 ???:1] "http: TLS handshake error from 192.168.126.11:49822: no serving certificate available for the kubelet"
Dec 09 14:29:16 crc kubenswrapper[5173]: I1209 14:29:16.384214 5173 ???:1] "http: TLS handshake error from 192.168.126.11:54056: no serving certificate available for the kubelet"
Dec 09 14:29:16 crc kubenswrapper[5173]: I1209 14:29:16.562718 5173 ???:1] "http: TLS handshake error from 192.168.126.11:54060: no serving certificate available for the kubelet"
Dec 09 14:29:16 crc kubenswrapper[5173]: I1209 14:29:16.584016 5173 ???:1] "http: TLS handshake error from 192.168.126.11:54064: no serving certificate available for the kubelet"
Dec 09 14:29:16 crc kubenswrapper[5173]: I1209 14:29:16.726627 5173 ???:1] "http: TLS handshake error from 192.168.126.11:54080: no serving certificate available for the kubelet"
Dec 09 14:29:16 crc kubenswrapper[5173]: I1209 14:29:16.793566 5173 ???:1] "http: TLS handshake error from 192.168.126.11:54084: no serving certificate available for the kubelet"
Dec 09 14:29:19 crc kubenswrapper[5173]: I1209 14:29:19.085090 5173 patch_prober.go:28] interesting pod/machine-config-daemon-pxfmg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 14:29:19 crc kubenswrapper[5173]: I1209 14:29:19.085596 5173 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 14:29:19 crc kubenswrapper[5173]: I1209 14:29:19.085657 5173 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg"
Dec 09 14:29:19 crc kubenswrapper[5173]: I1209 14:29:19.086332 5173 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ec21fae24ed5b475fc335cf81728357994814b2a3f96c37e355ce8993f76f7cf"} pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 14:29:19 crc kubenswrapper[5173]: I1209 14:29:19.086411 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" podUID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerName="machine-config-daemon" containerID="cri-o://ec21fae24ed5b475fc335cf81728357994814b2a3f96c37e355ce8993f76f7cf" gracePeriod=600
Dec 09 14:29:19 crc kubenswrapper[5173]: I1209 14:29:19.316590 5173 generic.go:358] "Generic (PLEG): container finished" podID="8a8dd347-8a1b-4551-a318-abe7c12df817" containerID="ec21fae24ed5b475fc335cf81728357994814b2a3f96c37e355ce8993f76f7cf" exitCode=0
Dec 09 14:29:19 crc kubenswrapper[5173]: I1209 14:29:19.316744 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" event={"ID":"8a8dd347-8a1b-4551-a318-abe7c12df817","Type":"ContainerDied","Data":"ec21fae24ed5b475fc335cf81728357994814b2a3f96c37e355ce8993f76f7cf"}
Dec 09 14:29:19 crc kubenswrapper[5173]: I1209 14:29:19.316946 5173 scope.go:117] "RemoveContainer" containerID="859cb3132f564d2a8f9a55f99e30a3d865a9afbcb1dbb53a0523762f86be0540"
Dec 09 14:29:20 crc kubenswrapper[5173]: I1209 14:29:20.326272 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pxfmg" event={"ID":"8a8dd347-8a1b-4551-a318-abe7c12df817","Type":"ContainerStarted","Data":"aad25cd53778de413b40f0c576d76d71a142e982ae468dd9007765f98e7a8c12"}
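[Annotation] The machine-config-daemon liveness failures above land exactly 30 seconds apart (14:28:19, 14:28:49, 14:29:19) before the kubelet declares the container unhealthy and kills it, which is consistent with periodSeconds=30 and the Kubernetes default failureThreshold of 3. A minimal sketch of a probe with that shape, reconstructed from the URL in the log rather than read from the actual machine-config-daemon manifest:

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// LivenessProbe mirrors the probe this log shows failing: an HTTP GET against
// 127.0.0.1:8798/health (the daemon runs on the host network). PeriodSeconds
// and FailureThreshold are inferred from the 30s failure cadence, not taken
// from the real DaemonSet.
var LivenessProbe = &corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Host: "127.0.0.1",
			Path: "/health",
			Port: intstr.FromInt(8798),
		},
	},
	PeriodSeconds:    30, // failures at 14:28:19, 14:28:49, 14:29:19
	FailureThreshold: 3,  // default; the third consecutive failure triggers the restart
}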
Dec 09 14:29:23 crc kubenswrapper[5173]: I1209 14:29:23.135867 5173 ???:1] "http: TLS handshake error from 192.168.126.11:52134: no serving certificate available for the kubelet"
Dec 09 14:29:54 crc kubenswrapper[5173]: I1209 14:29:54.556521 5173 generic.go:358] "Generic (PLEG): container finished" podID="d9535925-e4f7-40f5-b541-0deb128ae13a" containerID="709bb589d3eeb8ffe04ced62c3d930a2886209de03fbcb68314bd641706e3ca7" exitCode=0
Dec 09 14:29:54 crc kubenswrapper[5173]: I1209 14:29:54.556622 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nf4mr/must-gather-lqnw5" event={"ID":"d9535925-e4f7-40f5-b541-0deb128ae13a","Type":"ContainerDied","Data":"709bb589d3eeb8ffe04ced62c3d930a2886209de03fbcb68314bd641706e3ca7"}
Dec 09 14:29:54 crc kubenswrapper[5173]: I1209 14:29:54.557898 5173 scope.go:117] "RemoveContainer" containerID="709bb589d3eeb8ffe04ced62c3d930a2886209de03fbcb68314bd641706e3ca7"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.091988 5173 ???:1] "http: TLS handshake error from 192.168.126.11:35320: no serving certificate available for the kubelet"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.157574 5173 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh"]
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.158256 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="27117986-c17b-4402-b5a3-768ff7063fa8" containerName="registry-server"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.158271 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="27117986-c17b-4402-b5a3-768ff7063fa8" containerName="registry-server"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.158288 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="27117986-c17b-4402-b5a3-768ff7063fa8" containerName="extract-content"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.158296 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="27117986-c17b-4402-b5a3-768ff7063fa8" containerName="extract-content"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.158310 5173 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="27117986-c17b-4402-b5a3-768ff7063fa8" containerName="extract-utilities"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.158315 5173 state_mem.go:107] "Deleted CPUSet assignment" podUID="27117986-c17b-4402-b5a3-768ff7063fa8" containerName="extract-utilities"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.158436 5173 memory_manager.go:356] "RemoveStaleState removing state" podUID="27117986-c17b-4402-b5a3-768ff7063fa8" containerName="registry-server"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.172252 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh"]
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.172375 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.178579 5173 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.178584 5173 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.185601 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmjfm\" (UniqueName: \"kubernetes.io/projected/3d5e34c2-a6d7-43fc-9000-9db0a76930fe-kube-api-access-dmjfm\") pod \"collect-profiles-29421510-kjhsh\" (UID: \"3d5e34c2-a6d7-43fc-9000-9db0a76930fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.185668 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d5e34c2-a6d7-43fc-9000-9db0a76930fe-config-volume\") pod \"collect-profiles-29421510-kjhsh\" (UID: \"3d5e34c2-a6d7-43fc-9000-9db0a76930fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.185734 5173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d5e34c2-a6d7-43fc-9000-9db0a76930fe-secret-volume\") pod \"collect-profiles-29421510-kjhsh\" (UID: \"3d5e34c2-a6d7-43fc-9000-9db0a76930fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.255434 5173 ???:1] "http: TLS handshake error from 192.168.126.11:35336: no serving certificate available for the kubelet"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.266788 5173 ???:1] "http: TLS handshake error from 192.168.126.11:35342: no serving certificate available for the kubelet"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.286915 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dmjfm\" (UniqueName: \"kubernetes.io/projected/3d5e34c2-a6d7-43fc-9000-9db0a76930fe-kube-api-access-dmjfm\") pod \"collect-profiles-29421510-kjhsh\" (UID: \"3d5e34c2-a6d7-43fc-9000-9db0a76930fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.286968 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d5e34c2-a6d7-43fc-9000-9db0a76930fe-config-volume\") pod \"collect-profiles-29421510-kjhsh\" (UID: \"3d5e34c2-a6d7-43fc-9000-9db0a76930fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.287011 5173 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d5e34c2-a6d7-43fc-9000-9db0a76930fe-secret-volume\") pod \"collect-profiles-29421510-kjhsh\" (UID: \"3d5e34c2-a6d7-43fc-9000-9db0a76930fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.288237 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d5e34c2-a6d7-43fc-9000-9db0a76930fe-config-volume\") pod \"collect-profiles-29421510-kjhsh\" (UID: \"3d5e34c2-a6d7-43fc-9000-9db0a76930fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.292060 5173 ???:1] "http: TLS handshake error from 192.168.126.11:35348: no serving certificate available for the kubelet"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.295001 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d5e34c2-a6d7-43fc-9000-9db0a76930fe-secret-volume\") pod \"collect-profiles-29421510-kjhsh\" (UID: \"3d5e34c2-a6d7-43fc-9000-9db0a76930fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.303485 5173 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmjfm\" (UniqueName: \"kubernetes.io/projected/3d5e34c2-a6d7-43fc-9000-9db0a76930fe-kube-api-access-dmjfm\") pod \"collect-profiles-29421510-kjhsh\" (UID: \"3d5e34c2-a6d7-43fc-9000-9db0a76930fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh"
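[Annotation] The mount sequence above is the kubelet volume reconciler's standard flow: VerifyControllerAttachedVolume records each declared volume as attached, then MountVolume.SetUp materializes it under /var/lib/kubelet/pods/<uid>/volumes. For the collect-profiles job that means a ConfigMap volume, a Secret volume, and the kubelet-injected projected service-account token (kube-api-access-dmjfm). A sketch of the declared volumes; the ConfigMap name comes from the log, while the Secret's source name does not appear there and is a placeholder assumption:

package example

import corev1 "k8s.io/api/core/v1"

// CollectProfilesVolumes reconstructs the two declared volumes from the
// UniqueName strings above. kube-api-access-dmjfm is the projected
// service-account token volume that the kubelet injects on its own, so it
// never appears in the pod manifest.
var CollectProfilesVolumes = []corev1.Volume{
	{
		Name: "config-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "collect-profiles-config"},
			},
		},
	},
	{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "collect-profiles-secret"}, // hypothetical name, not in the log
		},
	},
}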
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.303541 5173 ???:1] "http: TLS handshake error from 192.168.126.11:35356: no serving certificate available for the kubelet"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.317698 5173 ???:1] "http: TLS handshake error from 192.168.126.11:35372: no serving certificate available for the kubelet"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.329421 5173 ???:1] "http: TLS handshake error from 192.168.126.11:35388: no serving certificate available for the kubelet"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.344551 5173 ???:1] "http: TLS handshake error from 192.168.126.11:35398: no serving certificate available for the kubelet"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.356768 5173 ???:1] "http: TLS handshake error from 192.168.126.11:35404: no serving certificate available for the kubelet"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.492791 5173 ???:1] "http: TLS handshake error from 192.168.126.11:35420: no serving certificate available for the kubelet"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.499624 5173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.503848 5173 ???:1] "http: TLS handshake error from 192.168.126.11:35422: no serving certificate available for the kubelet"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.530784 5173 ???:1] "http: TLS handshake error from 192.168.126.11:35430: no serving certificate available for the kubelet"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.543912 5173 ???:1] "http: TLS handshake error from 192.168.126.11:35440: no serving certificate available for the kubelet"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.560128 5173 ???:1] "http: TLS handshake error from 192.168.126.11:35454: no serving certificate available for the kubelet"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.572447 5173 ???:1] "http: TLS handshake error from 192.168.126.11:35456: no serving certificate available for the kubelet"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.588184 5173 ???:1] "http: TLS handshake error from 192.168.126.11:35464: no serving certificate available for the kubelet"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.600294 5173 ???:1] "http: TLS handshake error from 192.168.126.11:35476: no serving certificate available for the kubelet"
Dec 09 14:30:00 crc kubenswrapper[5173]: I1209 14:30:00.698151 5173 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh"]
Dec 09 14:30:01 crc kubenswrapper[5173]: I1209 14:30:01.291612 5173 generic.go:358] "Generic (PLEG): container finished" podID="3d5e34c2-a6d7-43fc-9000-9db0a76930fe" containerID="80e6ee7694b1bd78c9c09331e7ed31f332ea554fe751931feeb5561da1ce73e4" exitCode=0
Dec 09 14:30:01 crc kubenswrapper[5173]: I1209 14:30:01.291664 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh" event={"ID":"3d5e34c2-a6d7-43fc-9000-9db0a76930fe","Type":"ContainerDied","Data":"80e6ee7694b1bd78c9c09331e7ed31f332ea554fe751931feeb5561da1ce73e4"}
Dec 09 14:30:01 crc kubenswrapper[5173]: I1209 14:30:01.292056 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh" event={"ID":"3d5e34c2-a6d7-43fc-9000-9db0a76930fe","Type":"ContainerStarted","Data":"5f8d209071be33006f41ca7bbe3e94a18d7f3190786793a50740378cde9c7d14"}
Dec 09 14:30:02 crc kubenswrapper[5173]: I1209 14:30:02.538645 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh"
Dec 09 14:30:02 crc kubenswrapper[5173]: I1209 14:30:02.610726 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d5e34c2-a6d7-43fc-9000-9db0a76930fe-config-volume\") pod \"3d5e34c2-a6d7-43fc-9000-9db0a76930fe\" (UID: \"3d5e34c2-a6d7-43fc-9000-9db0a76930fe\") "
Dec 09 14:30:02 crc kubenswrapper[5173]: I1209 14:30:02.610836 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmjfm\" (UniqueName: \"kubernetes.io/projected/3d5e34c2-a6d7-43fc-9000-9db0a76930fe-kube-api-access-dmjfm\") pod \"3d5e34c2-a6d7-43fc-9000-9db0a76930fe\" (UID: \"3d5e34c2-a6d7-43fc-9000-9db0a76930fe\") "
Dec 09 14:30:02 crc kubenswrapper[5173]: I1209 14:30:02.610936 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d5e34c2-a6d7-43fc-9000-9db0a76930fe-secret-volume\") pod \"3d5e34c2-a6d7-43fc-9000-9db0a76930fe\" (UID: \"3d5e34c2-a6d7-43fc-9000-9db0a76930fe\") "
Dec 09 14:30:02 crc kubenswrapper[5173]: I1209 14:30:02.611876 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d5e34c2-a6d7-43fc-9000-9db0a76930fe-config-volume" (OuterVolumeSpecName: "config-volume") pod "3d5e34c2-a6d7-43fc-9000-9db0a76930fe" (UID: "3d5e34c2-a6d7-43fc-9000-9db0a76930fe"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 09 14:30:02 crc kubenswrapper[5173]: I1209 14:30:02.618171 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d5e34c2-a6d7-43fc-9000-9db0a76930fe-kube-api-access-dmjfm" (OuterVolumeSpecName: "kube-api-access-dmjfm") pod "3d5e34c2-a6d7-43fc-9000-9db0a76930fe" (UID: "3d5e34c2-a6d7-43fc-9000-9db0a76930fe"). InnerVolumeSpecName "kube-api-access-dmjfm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 09 14:30:02 crc kubenswrapper[5173]: I1209 14:30:02.618188 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d5e34c2-a6d7-43fc-9000-9db0a76930fe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3d5e34c2-a6d7-43fc-9000-9db0a76930fe" (UID: "3d5e34c2-a6d7-43fc-9000-9db0a76930fe"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 09 14:30:02 crc kubenswrapper[5173]: I1209 14:30:02.712395 5173 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d5e34c2-a6d7-43fc-9000-9db0a76930fe-config-volume\") on node \"crc\" DevicePath \"\""
Dec 09 14:30:02 crc kubenswrapper[5173]: I1209 14:30:02.712447 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dmjfm\" (UniqueName: \"kubernetes.io/projected/3d5e34c2-a6d7-43fc-9000-9db0a76930fe-kube-api-access-dmjfm\") on node \"crc\" DevicePath \"\""
Dec 09 14:30:02 crc kubenswrapper[5173]: I1209 14:30:02.712486 5173 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d5e34c2-a6d7-43fc-9000-9db0a76930fe-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 09 14:30:03 crc kubenswrapper[5173]: I1209 14:30:03.306827 5173 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh" event={"ID":"3d5e34c2-a6d7-43fc-9000-9db0a76930fe","Type":"ContainerDied","Data":"5f8d209071be33006f41ca7bbe3e94a18d7f3190786793a50740378cde9c7d14"}
Dec 09 14:30:03 crc kubenswrapper[5173]: I1209 14:30:03.306872 5173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f8d209071be33006f41ca7bbe3e94a18d7f3190786793a50740378cde9c7d14"
Dec 09 14:30:03 crc kubenswrapper[5173]: I1209 14:30:03.306874 5173 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421510-kjhsh"
Dec 09 14:30:04 crc kubenswrapper[5173]: I1209 14:30:04.117405 5173 ???:1] "http: TLS handshake error from 192.168.126.11:60170: no serving certificate available for the kubelet"
Dec 09 14:30:05 crc kubenswrapper[5173]: I1209 14:30:05.630515 5173 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nf4mr/must-gather-lqnw5"]
Dec 09 14:30:05 crc kubenswrapper[5173]: I1209 14:30:05.630767 5173 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-nf4mr/must-gather-lqnw5" podUID="d9535925-e4f7-40f5-b541-0deb128ae13a" containerName="copy" containerID="cri-o://e87376d8962057ccc6ccdbaf277ccf35d0fcea1101710bbc2e8c3d1bcd475070" gracePeriod=2
Dec 09 14:30:05 crc kubenswrapper[5173]: I1209 14:30:05.632931 5173 status_manager.go:895] "Failed to get status for pod" podUID="d9535925-e4f7-40f5-b541-0deb128ae13a" pod="openshift-must-gather-nf4mr/must-gather-lqnw5" err="pods \"must-gather-lqnw5\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-nf4mr\": no relationship found between node 'crc' and this object"
Dec 09 14:30:05 crc kubenswrapper[5173]: I1209 14:30:05.634918 5173 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nf4mr/must-gather-lqnw5"]
Need to start a new one" pod="openshift-must-gather-nf4mr/must-gather-lqnw5" Dec 09 14:30:06 crc kubenswrapper[5173]: I1209 14:30:06.051970 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d9535925-e4f7-40f5-b541-0deb128ae13a-must-gather-output\") pod \"d9535925-e4f7-40f5-b541-0deb128ae13a\" (UID: \"d9535925-e4f7-40f5-b541-0deb128ae13a\") " Dec 09 14:30:06 crc kubenswrapper[5173]: I1209 14:30:06.052211 5173 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kppw8\" (UniqueName: \"kubernetes.io/projected/d9535925-e4f7-40f5-b541-0deb128ae13a-kube-api-access-kppw8\") pod \"d9535925-e4f7-40f5-b541-0deb128ae13a\" (UID: \"d9535925-e4f7-40f5-b541-0deb128ae13a\") " Dec 09 14:30:06 crc kubenswrapper[5173]: I1209 14:30:06.061725 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9535925-e4f7-40f5-b541-0deb128ae13a-kube-api-access-kppw8" (OuterVolumeSpecName: "kube-api-access-kppw8") pod "d9535925-e4f7-40f5-b541-0deb128ae13a" (UID: "d9535925-e4f7-40f5-b541-0deb128ae13a"). InnerVolumeSpecName "kube-api-access-kppw8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 09 14:30:06 crc kubenswrapper[5173]: I1209 14:30:06.101668 5173 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9535925-e4f7-40f5-b541-0deb128ae13a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d9535925-e4f7-40f5-b541-0deb128ae13a" (UID: "d9535925-e4f7-40f5-b541-0deb128ae13a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 09 14:30:06 crc kubenswrapper[5173]: I1209 14:30:06.153396 5173 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kppw8\" (UniqueName: \"kubernetes.io/projected/d9535925-e4f7-40f5-b541-0deb128ae13a-kube-api-access-kppw8\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:06 crc kubenswrapper[5173]: I1209 14:30:06.153436 5173 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d9535925-e4f7-40f5-b541-0deb128ae13a-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 09 14:30:06 crc kubenswrapper[5173]: I1209 14:30:06.357869 5173 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nf4mr_must-gather-lqnw5_d9535925-e4f7-40f5-b541-0deb128ae13a/copy/0.log" Dec 09 14:30:06 crc kubenswrapper[5173]: I1209 14:30:06.358527 5173 generic.go:358] "Generic (PLEG): container finished" podID="d9535925-e4f7-40f5-b541-0deb128ae13a" containerID="e87376d8962057ccc6ccdbaf277ccf35d0fcea1101710bbc2e8c3d1bcd475070" exitCode=143 Dec 09 14:30:06 crc kubenswrapper[5173]: I1209 14:30:06.358575 5173 scope.go:117] "RemoveContainer" containerID="e87376d8962057ccc6ccdbaf277ccf35d0fcea1101710bbc2e8c3d1bcd475070" Dec 09 14:30:06 crc kubenswrapper[5173]: I1209 14:30:06.358602 5173 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nf4mr/must-gather-lqnw5" Dec 09 14:30:06 crc kubenswrapper[5173]: I1209 14:30:06.385496 5173 scope.go:117] "RemoveContainer" containerID="709bb589d3eeb8ffe04ced62c3d930a2886209de03fbcb68314bd641706e3ca7" Dec 09 14:30:06 crc kubenswrapper[5173]: I1209 14:30:06.450135 5173 scope.go:117] "RemoveContainer" containerID="e87376d8962057ccc6ccdbaf277ccf35d0fcea1101710bbc2e8c3d1bcd475070" Dec 09 14:30:06 crc kubenswrapper[5173]: E1209 14:30:06.450621 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e87376d8962057ccc6ccdbaf277ccf35d0fcea1101710bbc2e8c3d1bcd475070\": container with ID starting with e87376d8962057ccc6ccdbaf277ccf35d0fcea1101710bbc2e8c3d1bcd475070 not found: ID does not exist" containerID="e87376d8962057ccc6ccdbaf277ccf35d0fcea1101710bbc2e8c3d1bcd475070" Dec 09 14:30:06 crc kubenswrapper[5173]: I1209 14:30:06.450677 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e87376d8962057ccc6ccdbaf277ccf35d0fcea1101710bbc2e8c3d1bcd475070"} err="failed to get container status \"e87376d8962057ccc6ccdbaf277ccf35d0fcea1101710bbc2e8c3d1bcd475070\": rpc error: code = NotFound desc = could not find container \"e87376d8962057ccc6ccdbaf277ccf35d0fcea1101710bbc2e8c3d1bcd475070\": container with ID starting with e87376d8962057ccc6ccdbaf277ccf35d0fcea1101710bbc2e8c3d1bcd475070 not found: ID does not exist" Dec 09 14:30:06 crc kubenswrapper[5173]: I1209 14:30:06.450703 5173 scope.go:117] "RemoveContainer" containerID="709bb589d3eeb8ffe04ced62c3d930a2886209de03fbcb68314bd641706e3ca7" Dec 09 14:30:06 crc kubenswrapper[5173]: E1209 14:30:06.451057 5173 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"709bb589d3eeb8ffe04ced62c3d930a2886209de03fbcb68314bd641706e3ca7\": container with ID starting with 709bb589d3eeb8ffe04ced62c3d930a2886209de03fbcb68314bd641706e3ca7 not found: ID does not exist" containerID="709bb589d3eeb8ffe04ced62c3d930a2886209de03fbcb68314bd641706e3ca7" Dec 09 14:30:06 crc kubenswrapper[5173]: I1209 14:30:06.451088 5173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"709bb589d3eeb8ffe04ced62c3d930a2886209de03fbcb68314bd641706e3ca7"} err="failed to get container status \"709bb589d3eeb8ffe04ced62c3d930a2886209de03fbcb68314bd641706e3ca7\": rpc error: code = NotFound desc = could not find container \"709bb589d3eeb8ffe04ced62c3d930a2886209de03fbcb68314bd641706e3ca7\": container with ID starting with 709bb589d3eeb8ffe04ced62c3d930a2886209de03fbcb68314bd641706e3ca7 not found: ID does not exist" Dec 09 14:30:07 crc kubenswrapper[5173]: I1209 14:30:07.879135 5173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9535925-e4f7-40f5-b541-0deb128ae13a" path="/var/lib/kubelet/pods/d9535925-e4f7-40f5-b541-0deb128ae13a/volumes" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515116031261024441 0ustar coreroot‹íÁ  ÷Om7 €7šÞ'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015116031262017357 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015116026453016510 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015116026453015460 5ustar corecore