Jan 22 14:15:19 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 22 14:15:20 crc kubenswrapper[5110]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 14:15:20 crc kubenswrapper[5110]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 22 14:15:20 crc kubenswrapper[5110]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 14:15:20 crc kubenswrapper[5110]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 14:15:20 crc kubenswrapper[5110]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 22 14:15:20 crc kubenswrapper[5110]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.026944 5110 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030416 5110 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030453 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030458 5110 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030465 5110 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030469 5110 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030473 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030478 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030484 5110 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030487 5110 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030491 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030495 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030500 5110 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030504 5110 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030510 5110 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030515 5110 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030518 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030522 5110 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030525 5110 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030528 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030532 5110 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030535 5110 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030539 5110 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030542 5110 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030546 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030549 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030553 5110 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030556 5110 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030559 5110 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030563 5110 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030567 5110 feature_gate.go:328] unrecognized feature gate: Example
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030570 5110 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030573 5110 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030577 5110 feature_gate.go:328] unrecognized feature gate: Example2
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030581 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030584 5110 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030588 5110 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030593 5110 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030599 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030603 5110 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030608 5110 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030612 5110 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030618 5110 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030636 5110 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030640 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030643 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030653 5110 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030657 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030661 5110 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030665 5110 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030669 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030673 5110 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030677 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030680 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030683 5110 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030686 5110 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030689 5110 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030693 5110 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030696 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030699 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030702 5110 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030706 5110 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030709 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030712 5110 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030716 5110 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030722 5110 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030727 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030732 5110 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030735 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030739 5110 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030742 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030745 5110 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030749 5110 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030752 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030755 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030758 5110 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030761 5110 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030764 5110 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030769 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030772 5110 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030775 5110 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030778 5110 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030782 5110 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030786 5110 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030789 5110 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030792 5110 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.030795 5110 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031273 5110 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031279 5110 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031282 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031286 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031289 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031292 5110 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031296 5110 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031299 5110 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031304 5110 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031308 5110 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031311 5110 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031316 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031321 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031325 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031329 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031333 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031337 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031342 5110 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031346 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031350 5110 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031354 5110 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031358 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031362 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031369 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031373 5110 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031377 5110 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031381 5110 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031386 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031390 5110 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031394 5110 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031398 5110 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031402 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031407 5110 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031411 5110 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031415 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031419 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031423 5110 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031428 5110 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031432 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031435 5110 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031439 5110 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031442 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031445 5110 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031448 5110 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031452 5110 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031456 5110 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031459 5110 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031463 5110 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031466 5110 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031470 5110 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031473 5110 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031476 5110 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031480 5110 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031483 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031486 5110 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031491 5110 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031494 5110 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031497 5110 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031501 5110 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031504 5110 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031507 5110 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031510 5110 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031513 5110 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031517 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031520 5110 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031523 5110 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031527 5110 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031530 5110 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031534 5110 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031537 5110 feature_gate.go:328] unrecognized feature gate: Example2
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031540 5110 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031543 5110 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031547 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031550 5110 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031553 5110 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031557 5110 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031560 5110 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031563 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031568 5110 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031572 5110 feature_gate.go:328] unrecognized feature gate: Example
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031575 5110 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031578 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031581 5110 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031584 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031587 5110 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.031590 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031714 5110 flags.go:64] FLAG: --address="0.0.0.0"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031727 5110 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031736 5110 flags.go:64] FLAG: --anonymous-auth="true"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031742 5110 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031748 5110 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031752 5110 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031758 5110 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031763 5110 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031767 5110 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031771 5110 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031776 5110 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031780 5110 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031785 5110 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031789 5110 flags.go:64] FLAG: --cgroup-root=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031793 5110 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031803 5110 flags.go:64] FLAG: --client-ca-file=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031806 5110 flags.go:64] FLAG: --cloud-config=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031811 5110 flags.go:64] FLAG: --cloud-provider=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031814 5110 flags.go:64] FLAG: --cluster-dns="[]"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031820 5110 flags.go:64] FLAG: --cluster-domain=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031823 5110 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031827 5110 flags.go:64] FLAG: --config-dir=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031831 5110 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031836 5110 flags.go:64] FLAG: --container-log-max-files="5"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031841 5110 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031847 5110 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031851 5110 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031855 5110 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031860 5110 flags.go:64] FLAG: --contention-profiling="false"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031864 5110 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031868 5110 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031872 5110 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031876 5110 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031886 5110 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031890 5110 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031894 5110 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031898 5110 flags.go:64] FLAG: --enable-load-reader="false"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031902 5110 flags.go:64] FLAG: --enable-server="true"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031906 5110 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031917 5110 flags.go:64] FLAG: --event-burst="100"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031921 5110 flags.go:64] FLAG: --event-qps="50"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031925 5110 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031929 5110 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031934 5110 flags.go:64] FLAG: --eviction-hard=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031939 5110 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031942 5110 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031946 5110 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031952 5110 flags.go:64] FLAG: --eviction-soft=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031955 5110 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031959 5110 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031963 5110 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031966 5110 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031970 5110 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031973 5110 flags.go:64] FLAG: --fail-swap-on="true"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031977 5110 flags.go:64] FLAG: --feature-gates=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031987 5110 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031990 5110 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031994 5110 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.031999 5110 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032003 5110 flags.go:64] FLAG: --healthz-port="10248"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032006 5110 flags.go:64] FLAG: --help="false"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032010 5110 flags.go:64] FLAG: --hostname-override=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032014 5110 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032017 5110 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032022 5110 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032027 5110 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032031 5110 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032035 5110 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032038 5110 flags.go:64] FLAG: --image-service-endpoint=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032042 5110 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032046 5110 flags.go:64] FLAG: --kube-api-burst="100"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032050 5110 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032054
5110 flags.go:64] FLAG: --kube-api-qps="50" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032058 5110 flags.go:64] FLAG: --kube-reserved="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032062 5110 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032065 5110 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032069 5110 flags.go:64] FLAG: --kubelet-cgroups="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032073 5110 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032076 5110 flags.go:64] FLAG: --lock-file="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032082 5110 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032085 5110 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032089 5110 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032095 5110 flags.go:64] FLAG: --log-json-split-stream="false" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032099 5110 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032102 5110 flags.go:64] FLAG: --log-text-split-stream="false" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032106 5110 flags.go:64] FLAG: --logging-format="text" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032110 5110 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032114 5110 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032118 5110 flags.go:64] FLAG: --manifest-url="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032121 5110 
flags.go:64] FLAG: --manifest-url-header="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032127 5110 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032131 5110 flags.go:64] FLAG: --max-open-files="1000000" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032136 5110 flags.go:64] FLAG: --max-pods="110" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032140 5110 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032144 5110 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032148 5110 flags.go:64] FLAG: --memory-manager-policy="None" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032152 5110 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032157 5110 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032161 5110 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032165 5110 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032175 5110 flags.go:64] FLAG: --node-status-max-images="50" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032179 5110 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032183 5110 flags.go:64] FLAG: --oom-score-adj="-999" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032187 5110 flags.go:64] FLAG: --pod-cidr="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032190 5110 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032198 5110 flags.go:64] FLAG: --pod-manifest-path="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032201 5110 flags.go:64] FLAG: --pod-max-pids="-1" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032205 5110 flags.go:64] FLAG: --pods-per-core="0" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032209 5110 flags.go:64] FLAG: --port="10250" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032213 5110 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032216 5110 flags.go:64] FLAG: --provider-id="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032221 5110 flags.go:64] FLAG: --qos-reserved="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032225 5110 flags.go:64] FLAG: --read-only-port="10255" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032229 5110 flags.go:64] FLAG: --register-node="true" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032232 5110 flags.go:64] FLAG: --register-schedulable="true" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032236 5110 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032242 5110 flags.go:64] FLAG: --registry-burst="10" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032246 5110 flags.go:64] FLAG: --registry-qps="5" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032250 5110 flags.go:64] FLAG: --reserved-cpus="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032254 5110 flags.go:64] FLAG: --reserved-memory="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032259 5110 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 
14:15:20.032263 5110 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032266 5110 flags.go:64] FLAG: --rotate-certificates="false" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032270 5110 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032277 5110 flags.go:64] FLAG: --runonce="false" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032281 5110 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032285 5110 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032289 5110 flags.go:64] FLAG: --seccomp-default="false" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032293 5110 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032297 5110 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032302 5110 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032306 5110 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032310 5110 flags.go:64] FLAG: --storage-driver-password="root" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032314 5110 flags.go:64] FLAG: --storage-driver-secure="false" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032319 5110 flags.go:64] FLAG: --storage-driver-table="stats" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032323 5110 flags.go:64] FLAG: --storage-driver-user="root" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032328 5110 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032333 5110 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 22 
14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032338 5110 flags.go:64] FLAG: --system-cgroups="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032342 5110 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032350 5110 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032354 5110 flags.go:64] FLAG: --tls-cert-file="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032359 5110 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032368 5110 flags.go:64] FLAG: --tls-min-version="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032373 5110 flags.go:64] FLAG: --tls-private-key-file="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032377 5110 flags.go:64] FLAG: --topology-manager-policy="none" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032382 5110 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032387 5110 flags.go:64] FLAG: --topology-manager-scope="container" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032392 5110 flags.go:64] FLAG: --v="2" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032399 5110 flags.go:64] FLAG: --version="false" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032405 5110 flags.go:64] FLAG: --vmodule="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032411 5110 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.032416 5110 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032545 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032551 5110 feature_gate.go:328] unrecognized feature gate: 
NewOLM Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032555 5110 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032559 5110 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032571 5110 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032575 5110 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032578 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032582 5110 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032585 5110 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032591 5110 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032595 5110 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
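Several of the flags echoed in the dump above (`--container-runtime-endpoint`, `--system-reserved`, `--register-with-taints`, `--volume-plugin-dir`) are the same ones the kubelet flagged as deprecated at startup, with the advice to move them into the config file. A rough sketch of the equivalent `KubeletConfiguration` fragment, using the values from this log (field names per the upstream v1beta1 API; endpoint scheme assumed):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Replaces --container-runtime-endpoint=/var/run/crio/crio.sock
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
# Replaces --system-reserved=cpu=200m,ephemeral-storage=350Mi,memory=350Mi
systemReserved:
  cpu: 200m
  memory: 350Mi
  ephemeral-storage: 350Mi
# Replaces --register-with-taints=node-role.kubernetes.io/master=:NoSchedule
registerWithTaints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
# Replaces --volume-plugin-dir
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
```

This is only an illustration of the mapping, not the cluster's actual config; on an OpenShift node the kubelet config is rendered by the machine-config operator rather than edited by hand.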
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032599 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032603 5110 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032607 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032610 5110 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032614 5110 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032621 5110 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032637 5110 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032640 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032645 5110 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032648 5110 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032652 5110 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032656 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032659 5110 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032662 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 
14:15:20.032665 5110 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032669 5110 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032672 5110 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032675 5110 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032679 5110 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032682 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032685 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032688 5110 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032692 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032696 5110 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
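The `unrecognized feature gate` warnings above are OpenShift-specific gate names that the kubelet's upstream feature-gate parser does not know; the same set is re-reported each time the gate list is parsed, so the warnings repeat. A minimal sketch for tallying the distinct gates from journal output, assuming the line format shown above (the sample lines here are abbreviated copies, not the full log):

```python
import re
from collections import Counter

# Abbreviated sample journal lines in the format shown above.
lines = [
    "Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032551 5110 feature_gate.go:328] unrecognized feature gate: NewOLM",
    "Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047592 5110 feature_gate.go:328] unrecognized feature gate: NewOLM",
    "Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032578 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall",
]

# Capture the gate name after the fixed warning prefix.
pattern = re.compile(r"unrecognized feature gate: (\S+)")
counts = Counter(m.group(1) for line in lines for m in pattern.finditer(line))
for gate, n in counts.most_common():
    print(gate, n)
```

Fed the full journal instead of the samples, this would show each gate appearing once per parsing pass.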
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032700 5110 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032704 5110 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032714 5110 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032718 5110 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032721 5110 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032724 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032729 5110 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032732 5110 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032736 5110 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032739 5110 feature_gate.go:328] unrecognized feature gate: Example2 Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032742 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032745 5110 feature_gate.go:328] unrecognized feature gate: Example Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032749 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032752 5110 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032755 
5110 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032758 5110 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032762 5110 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032765 5110 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032768 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032772 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032775 5110 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032778 5110 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032781 5110 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032784 5110 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032788 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032791 5110 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032794 5110 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032797 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032800 5110 feature_gate.go:328] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032804 5110 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032807 5110 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032810 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032813 5110 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032816 5110 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032819 5110 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032828 5110 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032832 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032835 5110 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032839 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032843 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032846 5110 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032849 5110 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032852 5110 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 22 14:15:20 crc 
kubenswrapper[5110]: W0122 14:15:20.032856 5110 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032859 5110 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032862 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032866 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032869 5110 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032872 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032875 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.032878 5110 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.033070 5110 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.046757 5110 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.046827 5110 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 22 14:15:20 crc 
kubenswrapper[5110]: W0122 14:15:20.046941 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.046954 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.046962 5110 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.046995 5110 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047002 5110 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047009 5110 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047015 5110 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047022 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047028 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047034 5110 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047041 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047047 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047053 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047086 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 
14:15:20.047092 5110 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047098 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047104 5110 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047114 5110 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047126 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047133 5110 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047140 5110 feature_gate.go:328] unrecognized feature gate: Example Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047174 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047180 5110 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047189 5110 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047198 5110 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047205 5110 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047212 5110 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047219 5110 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047226 5110 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047259 5110 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047267 5110 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047276 5110 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047282 5110 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047289 5110 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047295 5110 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047301 5110 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047307 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047313 5110 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047319 5110 
feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047326 5110 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047332 5110 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047337 5110 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047371 5110 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047378 5110 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047385 5110 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047391 5110 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047397 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047404 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047410 5110 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047416 5110 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047422 5110 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047428 5110 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047435 5110 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 22 
14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047441 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047447 5110 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047453 5110 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047459 5110 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047464 5110 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047470 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047478 5110 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047484 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047514 5110 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047522 5110 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047529 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047536 5110 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047543 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047549 5110 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 
14:15:20.047555 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047561 5110 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047568 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047574 5110 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047580 5110 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047586 5110 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047592 5110 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047599 5110 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047605 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047611 5110 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047617 5110 feature_gate.go:328] unrecognized feature gate: Example2 Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047641 5110 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047647 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047654 5110 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047660 5110 
feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047667 5110 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047672 5110 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047678 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047685 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.047697 5110 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047936 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047952 5110 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047962 5110 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047969 5110 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047976 5110 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047983 5110 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047990 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.047997 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048004 5110 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048010 5110 feature_gate.go:328] unrecognized feature gate: Example Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048018 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048025 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048031 5110 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048037 5110 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048043 5110 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048050 5110 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048056 
5110 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048062 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048068 5110 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048074 5110 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048080 5110 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048087 5110 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048093 5110 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048099 5110 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048132 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048140 5110 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048146 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048153 5110 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048159 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048165 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048171 5110 
feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048178 5110 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048184 5110 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048190 5110 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048196 5110 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048202 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048208 5110 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048217 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048223 5110 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048229 5110 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048235 5110 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048241 5110 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048247 5110 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048255 5110 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048261 5110 feature_gate.go:328] unrecognized feature gate: 
IrreconcilableMachineConfig Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048268 5110 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048274 5110 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048280 5110 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048286 5110 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048293 5110 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048299 5110 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048305 5110 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048311 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048317 5110 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048323 5110 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048330 5110 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048336 5110 feature_gate.go:328] unrecognized feature gate: Example2 Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048342 5110 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048349 5110 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 22 14:15:20 crc 
kubenswrapper[5110]: W0122 14:15:20.048356 5110 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048362 5110 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048369 5110 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048376 5110 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048382 5110 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048388 5110 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048394 5110 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048401 5110 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048407 5110 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048413 5110 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048420 5110 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048426 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048433 5110 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048439 5110 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 
14:15:20.048446 5110 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048452 5110 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048458 5110 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048466 5110 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048473 5110 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048479 5110 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048485 5110 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048491 5110 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048498 5110 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048504 5110 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048512 5110 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048521 5110 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.048528 5110 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.048540 5110 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.051984 5110 server.go:962] "Client rotation is on, will bootstrap in background" Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.054787 5110 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.059982 5110 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.060169 5110 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.061223 5110 server.go:1019] "Starting client certificate rotation" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.061434 5110 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.061575 5110 
certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.077565 5110 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.079466 5110 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.079591 5110 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.095825 5110 log.go:25] "Validated CRI v1 runtime API" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.112738 5110 log.go:25] "Validated CRI v1 image API" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.114729 5110 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.118111 5110 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-22-14-08-07-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.118189 5110 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 
fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.139888 5110 manager.go:217] Machine: {Timestamp:2026-01-22 14:15:20.137395481 +0000 UTC m=+0.359479880 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:c7f66b8f-fb8d-43bb-91c4-80fc1b273d77 BootID:852a491e-9e7b-4f26-a7f5-3ca241db6d4a Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:a5:82:f5 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:a5:82:f5 Speed:-1 Mtu:1500} 
{Name:ens7 MacAddress:fa:16:3e:d4:a5:4a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:89:5b:b2 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:66:ba:be Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b5:10:db Speed:-1 Mtu:1496} {Name:eth10 MacAddress:8e:1f:68:c2:ee:87 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a2:33:a1:24:1b:b1 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 
BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.140250 5110 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.140549 5110 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.141941 5110 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.142001 5110 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None
","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.144792 5110 topology_manager.go:138] "Creating topology manager with none policy" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.144808 5110 container_manager_linux.go:306] "Creating device plugin manager" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.144831 5110 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.145956 5110 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.146391 5110 state_mem.go:36] "Initialized new in-memory state store" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.146685 5110 server.go:1267] "Using root directory" path="/var/lib/kubelet" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.147590 5110 kubelet.go:491] "Attempting to sync node with API server" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.147643 5110 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.147677 5110 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.147692 5110 kubelet.go:397] "Adding apiserver pod source" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.147705 5110 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.149712 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial 
tcp 38.129.56.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.149807 5110 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.149824 5110 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.149882 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.151218 5110 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.151241 5110 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.152735 5110 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.153034 5110 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.153687 5110 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.154311 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.154354 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.154373 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.154391 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.154408 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.154436 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.154454 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.154470 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.154489 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.154517 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.154567 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.154713 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.162755 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.162796 5110 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.163594 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.17:6443: connect: connection refused
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.179893 5110 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.180017 5110 server.go:1295] "Started kubelet"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.180316 5110 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.180411 5110 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.180485 5110 server_v1.go:47] "podresources" method="list" useActivePods=true
Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.181639 5110 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.17:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d132ec28d75ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.179946988 +0000 UTC m=+0.402031347,LastTimestamp:2026-01-22 14:15:20.179946988 +0000 UTC m=+0.402031347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.182358 5110 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 22 14:15:20 crc systemd[1]: Started Kubernetes Kubelet.
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.185003 5110 server.go:317] "Adding debug handlers to kubelet server"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.185049 5110 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.185422 5110 volume_manager.go:295] "The desired_state_of_world populator starts"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.185458 5110 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.185534 5110 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.188502 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.188663 5110 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.189713 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.191294 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.17:6443: connect: connection refused" interval="200ms"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.192936 5110 factory.go:55] Registering systemd factory
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.193017 5110 factory.go:223] Registration of the systemd container factory successfully
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.193868 5110 factory.go:153] Registering CRI-O factory
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.193914 5110 factory.go:223] Registration of the crio container factory successfully
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.193993 5110 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.194021 5110 factory.go:103] Registering Raw factory
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.194038 5110 manager.go:1196] Started watching for new ooms in manager
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.195380 5110 manager.go:319] Starting recovery of all containers
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.202376 5110 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210695 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210760 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210771 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210780 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210789 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210797 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210806 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210815 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210826 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210838 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210848 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210856 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210865 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210873 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210886 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210897 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210908 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210919 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210931 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210942 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210952 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210964 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210975 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210986 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.210998 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211009 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211020 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211055 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211070 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211081 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211105 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211116 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211127 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211138 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211148 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211162 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211174 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211185 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211198 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211210 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211222 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211246 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211257 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211268 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211279 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211289 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211300 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211314 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211327 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211342 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211357 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211371 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211383 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211395 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211406 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211421 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211442 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211665 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211681 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.211697 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.212791 5110 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.212820 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.212836 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.212851 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.212880 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.212894 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.212906 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.212921 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.212945 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.212957 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.212968 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.212980 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.212991 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213001 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213010 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213018 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213028 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213036 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213044 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213056 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213065 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213074 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213083 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213092 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213100 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213109 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213122 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213131 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213141 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213152 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213164 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213175 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213185 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213196 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213205 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213251 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213261 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213272 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213282 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b"
volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213292 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213303 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213348 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213358 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213370 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213381 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Jan 22 14:15:20 crc 
kubenswrapper[5110]: I0122 14:15:20.213391 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213401 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213411 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213420 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213429 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213438 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213449 5110 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213461 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213481 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213491 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213500 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213510 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213519 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" 
volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213527 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213535 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213546 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213555 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213566 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213575 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" 
volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213584 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213593 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213601 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213610 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213622 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213647 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" 
volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213655 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213665 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213673 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213684 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213692 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213702 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" 
volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213711 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213719 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213728 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213737 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213746 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213755 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" 
volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213766 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213775 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213784 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213792 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213801 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213808 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Jan 22 14:15:20 
crc kubenswrapper[5110]: I0122 14:15:20.213816 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213824 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213834 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213842 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213852 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213861 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213873 5110 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213883 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213891 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213901 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213913 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213923 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213932 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" 
volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213941 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213951 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213961 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213969 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213980 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213990 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" 
seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.213999 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214010 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214020 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214029 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214040 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214049 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214059 5110 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214069 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214081 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214090 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214102 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214111 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214121 5110 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214130 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214140 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214150 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214161 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214171 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214181 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214192 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214204 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214213 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214224 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214234 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214245 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214254 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214263 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214272 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214280 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214289 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214298 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214310 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214319 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214328 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214336 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214345 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214354 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214365 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214374 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214384 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214395 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214403 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214412 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214420 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214429 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214439 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214448 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214457 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214466 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214477 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214486 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214501 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214511 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214521 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214532 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214542 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214551 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214560 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214617 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214699 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214709 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214719 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214730 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214742 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214754 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214763 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214773 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214783 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214792 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214801 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214864 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214875 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214884 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214894 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214903 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214915 5110 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext=""
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214924 5110 reconstruct.go:97] "Volume reconstruction finished"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.214932 5110 reconciler.go:26] "Reconciler: start to sync state"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.230586 5110 manager.go:324] Recovery completed
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.252317 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.258670 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.258714 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.258726 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.264079 5110 cpu_manager.go:222] "Starting CPU manager" policy="none"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.264099 5110 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.264123 5110 state_mem.go:36] "Initialized new in-memory state store"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.268299 5110 policy_none.go:49] "None policy: Start"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.268328 5110 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.268343 5110 state_mem.go:35] "Initializing new in-memory state store"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.271838 5110 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.271927 5110 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.271970 5110 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.271984 5110 kubelet.go:2451] "Starting kubelet main sync loop"
Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.272187 5110 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.273014 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.289020 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.319020 5110 manager.go:341] "Starting Device Plugin manager"
Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.319354 5110 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.319384 5110 server.go:85] "Starting device plugin registration server"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.320094 5110 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.320121 5110 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.320277 5110 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.320359 5110 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.320365 5110 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.326489 5110 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.326542 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.372718 5110 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.372902 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.374044 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.374116 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.374131 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.375223 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.375318 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.375362 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.376018 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.376048 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.376059 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.376082 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.376098 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.376107 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.376589 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.376721 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.376756 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.377118 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.377165 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.377179 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.377284 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.377343 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.377352 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.378100 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.378187 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.378224 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.378713 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.378731 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.378748 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.378763 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.378750 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.378863 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.379451 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.379565 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.379599 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.380092 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.380120 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.380131 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.380568 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.380637 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.380653 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.381625 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.381669 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.382186 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.382216 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.382229 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.392948 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.17:6443: connect: connection refused" interval="400ms"
Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.410322 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.418500 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.418533 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.418550 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.418747 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.418783 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.418811 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.418834 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.418855 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.418909 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.418980 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.419231 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.419263 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.419012 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.419364 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.419389 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.419413 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.419416 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.419434 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName:
\"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.419490 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.419572 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.419708 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.419742 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.419765 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.419936 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.419962 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.420054 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.420075 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.420096 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.420286 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.420332 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.420432 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.421127 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.421169 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.421183 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.421208 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.421591 5110 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.17:6443: connect: connection refused" node="crc" Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.436852 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.453749 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.477085 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.484250 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.522087 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.522815 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.522968 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523048 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523062 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523189 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523319 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523397 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523436 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 
22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523466 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523494 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523567 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523569 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523643 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523680 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523697 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523707 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523741 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523794 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523833 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523856 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523881 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523800 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523914 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523925 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523959 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod 
\"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.524021 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.524054 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.524103 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.524140 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.523281 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.525528 5110 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.622041 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.623227 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.623288 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.623307 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.623343 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.624036 5110 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.17:6443: connect: connection refused" node="crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.711270 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.731141 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-03ffcbad90dbb4cf763f9a766826e87bc624b992133219b213c6d2b142d65fa0 WatchSource:0}: Error finding container 03ffcbad90dbb4cf763f9a766826e87bc624b992133219b213c6d2b142d65fa0: Status 404 returned error can't find the container with id 03ffcbad90dbb4cf763f9a766826e87bc624b992133219b213c6d2b142d65fa0 Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.734931 5110 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.737319 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.754342 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.760791 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-f68d32b2b223e9280735ecde389c9d6e4bd15bb9c40f2555afe6e6e7bc07cd0b WatchSource:0}: Error finding container f68d32b2b223e9280735ecde389c9d6e4bd15bb9c40f2555afe6e6e7bc07cd0b: Status 404 returned error can't find the container with id f68d32b2b223e9280735ecde389c9d6e4bd15bb9c40f2555afe6e6e7bc07cd0b Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.778206 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: I0122 14:15:20.785437 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:15:20 crc kubenswrapper[5110]: E0122 14:15:20.794250 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.17:6443: connect: connection refused" interval="800ms" Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.799986 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-4af51934c73b3ce29422a9f0222ceefec461491f0ed05a23156a74b8aa2603fb WatchSource:0}: Error finding container 4af51934c73b3ce29422a9f0222ceefec461491f0ed05a23156a74b8aa2603fb: Status 404 returned error can't find the container with id 4af51934c73b3ce29422a9f0222ceefec461491f0ed05a23156a74b8aa2603fb Jan 22 14:15:20 crc kubenswrapper[5110]: W0122 14:15:20.808277 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-ff2b6c20f9f3e298d0ac25de8ff50846cbd30b04754ba3f468912849639f86c4 WatchSource:0}: Error finding container ff2b6c20f9f3e298d0ac25de8ff50846cbd30b04754ba3f468912849639f86c4: Status 404 returned error can't find the container with id ff2b6c20f9f3e298d0ac25de8ff50846cbd30b04754ba3f468912849639f86c4 Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.024864 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.026586 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.026670 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.026692 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.026718 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:15:21 crc kubenswrapper[5110]: E0122 14:15:21.027198 5110 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.17:6443: connect: connection refused" node="crc" Jan 22 14:15:21 crc kubenswrapper[5110]: E0122 14:15:21.117434 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.164889 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.17:6443: connect: connection refused Jan 22 14:15:21 crc kubenswrapper[5110]: E0122 14:15:21.181195 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 14:15:21 crc kubenswrapper[5110]: E0122 14:15:21.271280 5110 reflector.go:200] "Failed to 
watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.277703 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"4d58e05af32372135da721ce73766f6ea10f366c5c1b13a318803ff65725c882"}
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.277761 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"4af51934c73b3ce29422a9f0222ceefec461491f0ed05a23156a74b8aa2603fb"}
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.278765 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"f27cdc5021a20b7752733cb6527b8791d142b8cdeed6c285d545ed03cf8d8096"}
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.278788 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"23e0363a5d4963585485fa926103d7501f4dcbe93a4818919eafb9d7f7f94f22"}
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.278916 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.279425 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.279474 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.279485 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:21 crc kubenswrapper[5110]: E0122 14:15:21.279687 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.280460 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"b8ec4f1e07794a08179e3f4d19be5c925b4f89bf9f1189111c866e26ac045614"}
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.280484 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"f68d32b2b223e9280735ecde389c9d6e4bd15bb9c40f2555afe6e6e7bc07cd0b"}
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.280568 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.280940 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.280959 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.280968 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:21 crc kubenswrapper[5110]: E0122 14:15:21.281090 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.282320 5110 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="744f292bdaf2a8b5cac207b241d045be99230cd860073c8e7b37c609136d2fcd" exitCode=0
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.282386 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"744f292bdaf2a8b5cac207b241d045be99230cd860073c8e7b37c609136d2fcd"}
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.282402 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"03ffcbad90dbb4cf763f9a766826e87bc624b992133219b213c6d2b142d65fa0"}
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.282452 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.283522 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.283571 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.283589 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:21 crc kubenswrapper[5110]: E0122 14:15:21.283832 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.284961 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"ee03fe75796c5268a84b792e8f46e78c28e13dacdceac5b4f4d8c783fb2f789e"}
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.285005 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"ff2b6c20f9f3e298d0ac25de8ff50846cbd30b04754ba3f468912849639f86c4"}
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.285129 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.285750 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.285778 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.285788 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:21 crc kubenswrapper[5110]: E0122 14:15:21.286817 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:21 crc kubenswrapper[5110]: E0122 14:15:21.596009 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.17:6443: connect: connection refused" interval="1.6s"
Jan 22 14:15:21 crc kubenswrapper[5110]: E0122 14:15:21.725811 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.827561 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.829569 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.829600 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.829611 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:21 crc kubenswrapper[5110]: I0122 14:15:21.829707 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 14:15:21 crc kubenswrapper[5110]: E0122 14:15:21.830120 5110 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.17:6443: connect: connection refused" node="crc"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.165033 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.17:6443: connect: connection refused
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.252437 5110 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 22 14:15:22 crc kubenswrapper[5110]: E0122 14:15:22.253296 5110 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.290473 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f27cdc5021a20b7752733cb6527b8791d142b8cdeed6c285d545ed03cf8d8096" exitCode=0
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.290555 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"f27cdc5021a20b7752733cb6527b8791d142b8cdeed6c285d545ed03cf8d8096"}
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.290740 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.291614 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.291703 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.291719 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:22 crc kubenswrapper[5110]: E0122 14:15:22.292082 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.292898 5110 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="b8ec4f1e07794a08179e3f4d19be5c925b4f89bf9f1189111c866e26ac045614" exitCode=0
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.293033 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"b8ec4f1e07794a08179e3f4d19be5c925b4f89bf9f1189111c866e26ac045614"}
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.293283 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.294061 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.294087 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.294098 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.294128 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:22 crc kubenswrapper[5110]: E0122 14:15:22.294310 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.299964 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.300007 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.300021 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.301835 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"0576c46d1c23047e597f77ca52369f79c30edccc5f819d2efbf1c4389bc97657"}
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.302072 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.303084 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.303130 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.303141 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:22 crc kubenswrapper[5110]: E0122 14:15:22.303384 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.305493 5110 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="ee03fe75796c5268a84b792e8f46e78c28e13dacdceac5b4f4d8c783fb2f789e" exitCode=0
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.305654 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"ee03fe75796c5268a84b792e8f46e78c28e13dacdceac5b4f4d8c783fb2f789e"}
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.305721 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"b9de386700dff95c5e03c1805a61ba1df0277684f7dcfc3f037b12d88e6fd06d"}
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.305744 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"6257cd42c0aabd74cf1b4fb090dcc7f6042eff0cceb65fcfce3017475607d322"}
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.305758 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"aaeea57cf421b20f0f956e67318a63fe34714ed959f1c539527cef6d4da220bc"}
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.305975 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.311186 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.311235 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.311256 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:22 crc kubenswrapper[5110]: E0122 14:15:22.311695 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.316107 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"df1ef784b1eb41885d2c53e0b4af0912047f8558bc39a160c44da724182b8997"}
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.316175 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"2b3c4fd9d2080630f69cecbc801ceda2e4e82ce9e46e4c1e77cb0e704ff57d63"}
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.316193 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"d1459b9c41b7d6069ce5b8517276b497b11aff1bc62b8073f3a370905ad2e3a2"}
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.316367 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.317050 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.317111 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:22 crc kubenswrapper[5110]: I0122 14:15:22.317126 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:22 crc kubenswrapper[5110]: E0122 14:15:22.317359 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.164759 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.165162 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.17:6443: connect: connection refused
Jan 22 14:15:23 crc kubenswrapper[5110]: E0122 14:15:23.197104 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.17:6443: connect: connection refused" interval="3.2s"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.325448 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"3614847fd33273d653fb4a663693171aff2ceb59eaba786247135909473a1a9c"}
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.325819 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7d2c08e39879f53e21ccd4ecb20d62edd65c7f5acd2c2025f318ee2f6eef942f"}
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.325834 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e4e44e7b63df2eabb368c778283fa297c5627baef12f749319188b018933e4a7"}
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.325845 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"872c2dac50c7f7b271cfb6e7cdfa4d5e922da39ef42941678469947454b1e572"}
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.326693 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.327798 5110 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="5ea2ef2acbf2852d0bf69defad220c1eddd4f00f226569fe9ae074a5dadb8c80" exitCode=0
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.327832 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"5ea2ef2acbf2852d0bf69defad220c1eddd4f00f226569fe9ae074a5dadb8c80"}
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.328032 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.328061 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.328061 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.328753 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.328783 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.328786 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.328813 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.328827 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.328794 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.328813 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.328983 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.328993 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:23 crc kubenswrapper[5110]: E0122 14:15:23.329210 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:23 crc kubenswrapper[5110]: E0122 14:15:23.329434 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:23 crc kubenswrapper[5110]: E0122 14:15:23.329646 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.430342 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.437342 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.437393 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.437405 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:23 crc kubenswrapper[5110]: I0122 14:15:23.437434 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 14:15:24 crc kubenswrapper[5110]: I0122 14:15:24.340402 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d80b5d8b8dbf89a5c7baed724d422f54f1f119f44be0e204f1a8237a5022ffbd"}
Jan 22 14:15:24 crc kubenswrapper[5110]: I0122 14:15:24.340715 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:24 crc kubenswrapper[5110]: I0122 14:15:24.342212 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:24 crc kubenswrapper[5110]: I0122 14:15:24.342325 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:24 crc kubenswrapper[5110]: I0122 14:15:24.342346 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:24 crc kubenswrapper[5110]: E0122 14:15:24.342770 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:24 crc kubenswrapper[5110]: I0122 14:15:24.347693 5110 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="91be0b9f5661930cd4f206e761c2313265d51ba016a5729ff1331f6f6a4d894e" exitCode=0
Jan 22 14:15:24 crc kubenswrapper[5110]: I0122 14:15:24.347760 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"91be0b9f5661930cd4f206e761c2313265d51ba016a5729ff1331f6f6a4d894e"}
Jan 22 14:15:24 crc kubenswrapper[5110]: I0122 14:15:24.347967 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:24 crc kubenswrapper[5110]: I0122 14:15:24.348145 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:24 crc kubenswrapper[5110]: I0122 14:15:24.348918 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:24 crc kubenswrapper[5110]: I0122 14:15:24.348963 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:24 crc kubenswrapper[5110]: I0122 14:15:24.348982 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:24 crc kubenswrapper[5110]: E0122 14:15:24.349433 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:24 crc kubenswrapper[5110]: I0122 14:15:24.349544 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:24 crc kubenswrapper[5110]: I0122 14:15:24.349661 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:24 crc kubenswrapper[5110]: I0122 14:15:24.349742 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:24 crc kubenswrapper[5110]: E0122 14:15:24.350177 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:24 crc kubenswrapper[5110]: I0122 14:15:24.515913 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 14:15:24 crc kubenswrapper[5110]: I0122 14:15:24.989446 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 14:15:25 crc kubenswrapper[5110]: I0122 14:15:25.359840 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"d96e4d28cb0a0172ec7bca9bb0b8abd60d4408d5162fc68cb60aa76ee29be139"}
Jan 22 14:15:25 crc kubenswrapper[5110]: I0122 14:15:25.359902 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"4218965e480ae0724ffb2ed0046da5c13adc9775781cd6655e4eee11b991bcbf"}
Jan 22 14:15:25 crc kubenswrapper[5110]: I0122 14:15:25.359920 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"1cd451579a994467439d569d9193e371a2f195c28e0cf59bf1cea771afb93f94"}
Jan 22 14:15:25 crc kubenswrapper[5110]: I0122 14:15:25.359934 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"78889d8bd043c44ad144a0c7886373ae1c52c095a91f6c24a6e2f1482d735c5c"}
Jan 22 14:15:25 crc kubenswrapper[5110]: I0122 14:15:25.360110 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:25 crc kubenswrapper[5110]: I0122 14:15:25.360199 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 14:15:25 crc kubenswrapper[5110]: I0122 14:15:25.361197 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:25 crc kubenswrapper[5110]: I0122 14:15:25.361257 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:25 crc kubenswrapper[5110]: I0122 14:15:25.361272 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:25 crc kubenswrapper[5110]: E0122 14:15:25.361715 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:25 crc kubenswrapper[5110]: I0122 14:15:25.895312 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 14:15:25 crc kubenswrapper[5110]: I0122 14:15:25.895533 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:25 crc kubenswrapper[5110]: I0122 14:15:25.896765 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:25 crc kubenswrapper[5110]: I0122 14:15:25.896887 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:25 crc kubenswrapper[5110]: I0122 14:15:25.896913 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:25 crc kubenswrapper[5110]: E0122 14:15:25.897553 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:26 crc kubenswrapper[5110]: I0122 14:15:26.164581 5110 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body=
Jan 22 14:15:26 crc kubenswrapper[5110]: I0122 14:15:26.164756 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded"
Jan 22 14:15:26 crc kubenswrapper[5110]: I0122 14:15:26.370557 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"f7c91988ce5d51e9bfdfbb8f42af44508dc734056eba57a8e902f3ef0fa9700b"}
Jan 22 14:15:26 crc kubenswrapper[5110]: I0122 14:15:26.370777 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:26 crc kubenswrapper[5110]: I0122 14:15:26.370847 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:26 crc kubenswrapper[5110]: I0122 14:15:26.371844 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:26 crc kubenswrapper[5110]: I0122 14:15:26.371898 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:26 crc kubenswrapper[5110]: I0122 14:15:26.371919 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:26 crc kubenswrapper[5110]: I0122 14:15:26.372049 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:26 crc kubenswrapper[5110]: I0122 14:15:26.372094 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:26 crc kubenswrapper[5110]: I0122 14:15:26.372108 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:26 crc kubenswrapper[5110]: E0122 14:15:26.372275 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:26 crc kubenswrapper[5110]: E0122 14:15:26.372539 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:26 crc kubenswrapper[5110]: I0122 14:15:26.619496 5110 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 22 14:15:27 crc kubenswrapper[5110]: I0122 14:15:27.199384 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 14:15:27 crc kubenswrapper[5110]: I0122 14:15:27.200014 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:27 crc kubenswrapper[5110]: I0122 14:15:27.201519 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:27 crc kubenswrapper[5110]: I0122 14:15:27.201582 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:27 crc kubenswrapper[5110]: I0122 14:15:27.201596 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:27 crc kubenswrapper[5110]: E0122 14:15:27.201981 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:27 crc kubenswrapper[5110]: I0122 14:15:27.207513 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 14:15:27 crc kubenswrapper[5110]: I0122 14:15:27.374658 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:27 crc kubenswrapper[5110]: I0122 14:15:27.374861 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:27 crc kubenswrapper[5110]: I0122 14:15:27.376042 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:27 crc kubenswrapper[5110]: I0122 14:15:27.376118 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:27 crc kubenswrapper[5110]: I0122 14:15:27.376134 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:27 crc kubenswrapper[5110]: I0122 14:15:27.376159 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:27 crc kubenswrapper[5110]: I0122 14:15:27.376206 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:27 crc kubenswrapper[5110]: I0122 14:15:27.376229 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:27 crc kubenswrapper[5110]: E0122 14:15:27.377253 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:27 crc kubenswrapper[5110]: E0122 14:15:27.377820 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:30 crc kubenswrapper[5110]: I0122 14:15:30.293284 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 14:15:30 crc kubenswrapper[5110]: I0122 14:15:30.293553 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:30 crc kubenswrapper[5110]: I0122 14:15:30.294864 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:30 crc kubenswrapper[5110]: I0122 14:15:30.294975 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:30 crc kubenswrapper[5110]: I0122 14:15:30.295044 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:30 crc kubenswrapper[5110]: E0122 14:15:30.295391 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:30 crc kubenswrapper[5110]: E0122 14:15:30.326732 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 22 14:15:30 crc kubenswrapper[5110]: I0122 14:15:30.409532 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Jan 22 14:15:30 crc kubenswrapper[5110]: I0122 14:15:30.410075 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:15:30 crc kubenswrapper[5110]: I0122 14:15:30.411418 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:15:30 crc kubenswrapper[5110]: I0122 14:15:30.411487 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:15:30 crc kubenswrapper[5110]: I0122 14:15:30.411506 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:15:30 crc kubenswrapper[5110]: E0122 14:15:30.412309 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:15:33 crc kubenswrapper[5110]: E0122 14:15:33.438414 5110 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Jan 22 14:15:33 crc kubenswrapper[5110]: I0122 14:15:33.734243 5110 trace.go:236] Trace[1738330310]: "Reflector ListAndWatch"
name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 14:15:23.732) (total time: 10001ms): Jan 22 14:15:33 crc kubenswrapper[5110]: Trace[1738330310]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:15:33.734) Jan 22 14:15:33 crc kubenswrapper[5110]: Trace[1738330310]: [10.001268808s] [10.001268808s] END Jan 22 14:15:33 crc kubenswrapper[5110]: E0122 14:15:33.734293 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 14:15:33 crc kubenswrapper[5110]: I0122 14:15:33.995692 5110 trace.go:236] Trace[2004402638]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 14:15:23.994) (total time: 10001ms): Jan 22 14:15:33 crc kubenswrapper[5110]: Trace[2004402638]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:15:33.995) Jan 22 14:15:33 crc kubenswrapper[5110]: Trace[2004402638]: [10.001510533s] [10.001510533s] END Jan 22 14:15:33 crc kubenswrapper[5110]: E0122 14:15:33.995762 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 14:15:34 crc kubenswrapper[5110]: I0122 14:15:34.007646 5110 trace.go:236] Trace[718524770]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 14:15:24.006) (total 
time: 10001ms): Jan 22 14:15:34 crc kubenswrapper[5110]: Trace[718524770]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:15:34.007) Jan 22 14:15:34 crc kubenswrapper[5110]: Trace[718524770]: [10.001472443s] [10.001472443s] END Jan 22 14:15:34 crc kubenswrapper[5110]: E0122 14:15:34.007688 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 14:15:34 crc kubenswrapper[5110]: I0122 14:15:34.165893 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 22 14:15:34 crc kubenswrapper[5110]: I0122 14:15:34.516450 5110 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 14:15:34 crc kubenswrapper[5110]: I0122 14:15:34.516581 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 14:15:34 crc kubenswrapper[5110]: I0122 14:15:34.620402 5110 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver 
namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 14:15:34 crc kubenswrapper[5110]: I0122 14:15:34.620520 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 14:15:35 crc kubenswrapper[5110]: I0122 14:15:35.473162 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 22 14:15:35 crc kubenswrapper[5110]: I0122 14:15:35.473590 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:35 crc kubenswrapper[5110]: I0122 14:15:35.474760 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:35 crc kubenswrapper[5110]: I0122 14:15:35.474826 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:35 crc kubenswrapper[5110]: I0122 14:15:35.474853 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:35 crc kubenswrapper[5110]: E0122 14:15:35.475733 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:35 crc kubenswrapper[5110]: I0122 14:15:35.520208 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 22 14:15:36 crc kubenswrapper[5110]: I0122 14:15:36.166375 5110 patch_prober.go:28] interesting pod/kube-controller-manager-crc 
container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Jan 22 14:15:36 crc kubenswrapper[5110]: I0122 14:15:36.166446 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Jan 22 14:15:36 crc kubenswrapper[5110]: E0122 14:15:36.397490 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 22 14:15:36 crc kubenswrapper[5110]: I0122 14:15:36.399603 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:36 crc kubenswrapper[5110]: I0122 14:15:36.400459 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:36 crc kubenswrapper[5110]: I0122 14:15:36.400539 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:36 crc kubenswrapper[5110]: I0122 14:15:36.400558 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:36 crc kubenswrapper[5110]: E0122 14:15:36.401282 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:36 crc kubenswrapper[5110]: I0122 14:15:36.412869 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 22 14:15:36 crc kubenswrapper[5110]: I0122 14:15:36.638911 5110 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:36 crc kubenswrapper[5110]: I0122 14:15:36.639995 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:36 crc kubenswrapper[5110]: I0122 14:15:36.640053 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:36 crc kubenswrapper[5110]: I0122 14:15:36.640071 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:36 crc kubenswrapper[5110]: I0122 14:15:36.640104 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:15:36 crc kubenswrapper[5110]: E0122 14:15:36.649472 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 14:15:36 crc kubenswrapper[5110]: E0122 14:15:36.995607 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 14:15:37 crc kubenswrapper[5110]: I0122 14:15:37.402692 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:37 crc kubenswrapper[5110]: I0122 14:15:37.403433 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:37 crc kubenswrapper[5110]: I0122 14:15:37.403474 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:37 crc kubenswrapper[5110]: I0122 14:15:37.403487 5110 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:37 crc kubenswrapper[5110]: E0122 14:15:37.403930 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.128795 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 14:15:39 crc kubenswrapper[5110]: I0122 14:15:39.522485 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:15:39 crc kubenswrapper[5110]: I0122 14:15:39.522719 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:39 crc kubenswrapper[5110]: I0122 14:15:39.523487 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:39 crc kubenswrapper[5110]: I0122 14:15:39.523527 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:39 crc kubenswrapper[5110]: I0122 14:15:39.523544 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.524024 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:39 crc kubenswrapper[5110]: I0122 14:15:39.528140 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:15:39 crc kubenswrapper[5110]: I0122 14:15:39.614466 5110 
trace.go:236] Trace[467502992]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 14:15:24.846) (total time: 14768ms): Jan 22 14:15:39 crc kubenswrapper[5110]: Trace[467502992]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 14768ms (14:15:39.614) Jan 22 14:15:39 crc kubenswrapper[5110]: Trace[467502992]: [14.768353949s] [14.768353949s] END Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.614517 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 14:15:39 crc kubenswrapper[5110]: I0122 14:15:39.614714 5110 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.615253 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec28d75ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.179946988 +0000 UTC m=+0.402031347,LastTimestamp:2026-01-22 14:15:20.179946988 +0000 UTC m=+0.402031347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.616857 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f2a71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258701937 +0000 UTC m=+0.480786296,LastTimestamp:2026-01-22 14:15:20.258701937 +0000 UTC m=+0.480786296,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.623805 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f7144 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258720068 +0000 UTC m=+0.480804427,LastTimestamp:2026-01-22 14:15:20.258720068 +0000 UTC m=+0.480804427,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.629839 5110 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f9d19 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258731289 +0000 UTC m=+0.480815648,LastTimestamp:2026-01-22 14:15:20.258731289 +0000 UTC m=+0.480815648,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.637775 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ecbc79f77 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.334753655 +0000 UTC m=+0.556838014,LastTimestamp:2026-01-22 14:15:20.334753655 +0000 UTC m=+0.556838014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: I0122 14:15:39.642117 5110 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35484->192.168.126.11:17697: 
read: connection reset by peer" start-of-body= Jan 22 14:15:39 crc kubenswrapper[5110]: I0122 14:15:39.642201 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35484->192.168.126.11:17697: read: connection reset by peer" Jan 22 14:15:39 crc kubenswrapper[5110]: I0122 14:15:39.642828 5110 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 22 14:15:39 crc kubenswrapper[5110]: I0122 14:15:39.642920 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.646488 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f2a71\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f2a71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258701937 +0000 UTC m=+0.480786296,LastTimestamp:2026-01-22 14:15:20.374076438 +0000 UTC 
m=+0.596160797,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.652039 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f7144\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f7144 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258720068 +0000 UTC m=+0.480804427,LastTimestamp:2026-01-22 14:15:20.37412448 +0000 UTC m=+0.596208849,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.657449 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f9d19\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f9d19 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258731289 +0000 UTC m=+0.480815648,LastTimestamp:2026-01-22 14:15:20.37413769 +0000 UTC m=+0.596222059,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 
14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.663568 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f2a71\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f2a71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258701937 +0000 UTC m=+0.480786296,LastTimestamp:2026-01-22 14:15:20.376038496 +0000 UTC m=+0.598122855,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.670104 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f7144\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f7144 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258720068 +0000 UTC m=+0.480804427,LastTimestamp:2026-01-22 14:15:20.376054776 +0000 UTC m=+0.598139135,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.675549 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f9d19\" is 
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f9d19 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258731289 +0000 UTC m=+0.480815648,LastTimestamp:2026-01-22 14:15:20.376064087 +0000 UTC m=+0.598148446,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.680478 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f2a71\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f2a71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258701937 +0000 UTC m=+0.480786296,LastTimestamp:2026-01-22 14:15:20.376092218 +0000 UTC m=+0.598176577,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.686276 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f7144\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f7144 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258720068 +0000 UTC m=+0.480804427,LastTimestamp:2026-01-22 14:15:20.376102778 +0000 UTC m=+0.598187137,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.691088 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f9d19\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f9d19 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258731289 +0000 UTC m=+0.480815648,LastTimestamp:2026-01-22 14:15:20.376112939 +0000 UTC m=+0.598197298,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.696724 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f2a71\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f2a71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258701937 +0000 UTC m=+0.480786296,LastTimestamp:2026-01-22 14:15:20.377157606 +0000 UTC m=+0.599241965,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.703897 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f7144\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f7144 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258720068 +0000 UTC m=+0.480804427,LastTimestamp:2026-01-22 14:15:20.377173056 +0000 UTC m=+0.599257415,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.709452 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f9d19\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f9d19 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258731289 +0000 UTC m=+0.480815648,LastTimestamp:2026-01-22 14:15:20.377185477 +0000 UTC m=+0.599269836,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.715361 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f2a71\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f2a71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258701937 +0000 UTC m=+0.480786296,LastTimestamp:2026-01-22 14:15:20.377310793 +0000 UTC m=+0.599395152,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.720442 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f7144\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f7144 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258720068 +0000 UTC 
m=+0.480804427,LastTimestamp:2026-01-22 14:15:20.377348024 +0000 UTC m=+0.599432383,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.725129 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f9d19\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f9d19 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258731289 +0000 UTC m=+0.480815648,LastTimestamp:2026-01-22 14:15:20.377356255 +0000 UTC m=+0.599440614,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.729291 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f2a71\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f2a71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258701937 +0000 UTC m=+0.480786296,LastTimestamp:2026-01-22 14:15:20.378732506 +0000 UTC m=+0.600816865,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.735986 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f2a71\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f2a71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258701937 +0000 UTC m=+0.480786296,LastTimestamp:2026-01-22 14:15:20.378743987 +0000 UTC m=+0.600828346,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.744389 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f7144\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f7144 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258720068 +0000 UTC m=+0.480804427,LastTimestamp:2026-01-22 14:15:20.378755907 +0000 UTC m=+0.600840266,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.750194 5110 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f9d19\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f9d19 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258731289 +0000 UTC m=+0.480815648,LastTimestamp:2026-01-22 14:15:20.378769188 +0000 UTC m=+0.600853547,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.756862 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d132ec73f7144\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d132ec73f7144 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.258720068 +0000 UTC m=+0.480804427,LastTimestamp:2026-01-22 14:15:20.378848662 +0000 UTC m=+0.600933021,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.764381 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d132ee3a8bd37 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.735382839 +0000 UTC m=+0.957467218,LastTimestamp:2026-01-22 14:15:20.735382839 +0000 UTC m=+0.957467218,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.769430 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132ee55d69f9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.764000761 +0000 UTC m=+0.986085120,LastTimestamp:2026-01-22 14:15:20.764000761 +0000 UTC m=+0.986085120,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.774132 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132ee5c3692e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.77068523 +0000 UTC m=+0.992769599,LastTimestamp:2026-01-22 14:15:20.77068523 +0000 UTC m=+0.992769599,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.778713 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d132ee7e84992 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.806656402 +0000 UTC m=+1.028740761,LastTimestamp:2026-01-22 14:15:20.806656402 +0000 UTC m=+1.028740761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.783039 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d132ee8472d03 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:20.812875011 +0000 UTC m=+1.034959420,LastTimestamp:2026-01-22 14:15:20.812875011 +0000 UTC m=+1.034959420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.787535 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.188d132f01680b07 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.234459399 +0000 UTC m=+1.456543758,LastTimestamp:2026-01-22 14:15:21.234459399 +0000 UTC m=+1.456543758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.791878 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d132f01b027f2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.239185394 +0000 UTC m=+1.461269753,LastTimestamp:2026-01-22 14:15:21.239185394 +0000 UTC m=+1.461269753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.797209 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f01bad83a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.239885882 +0000 UTC m=+1.461970241,LastTimestamp:2026-01-22 14:15:21.239885882 +0000 UTC m=+1.461970241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.803719 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132f01bffe31 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.240223281 +0000 UTC m=+1.462307650,LastTimestamp:2026-01-22 14:15:21.240223281 +0000 UTC m=+1.462307650,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.808198 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d132f01bffe27 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.240223271 +0000 UTC m=+1.462307630,LastTimestamp:2026-01-22 14:15:21.240223271 +0000 UTC m=+1.462307630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.816041 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d132f02133916 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.245677846 +0000 UTC m=+1.467762205,LastTimestamp:2026-01-22 14:15:21.245677846 +0000 UTC m=+1.467762205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.820788 5110 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d132f022e2511 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.247442193 +0000 UTC m=+1.469526552,LastTimestamp:2026-01-22 14:15:21.247442193 +0000 UTC m=+1.469526552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.827779 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d132f0264fadc openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.251035868 +0000 UTC m=+1.473120227,LastTimestamp:2026-01-22 14:15:21.251035868 
+0000 UTC m=+1.473120227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.834682 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132f02b4b098 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.256259736 +0000 UTC m=+1.478344095,LastTimestamp:2026-01-22 14:15:21.256259736 +0000 UTC m=+1.478344095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.841682 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d132f02b5dac4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.256336068 +0000 UTC m=+1.478420427,LastTimestamp:2026-01-22 14:15:21.256336068 +0000 UTC 
m=+1.478420427,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.846136 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f02b6e0a6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.25640311 +0000 UTC m=+1.478487459,LastTimestamp:2026-01-22 14:15:21.25640311 +0000 UTC m=+1.478487459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.850854 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d132f04691d0a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already 
present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.284861194 +0000 UTC m=+1.506945553,LastTimestamp:2026-01-22 14:15:21.284861194 +0000 UTC m=+1.506945553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.856071 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d132f04aecc9d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.289428125 +0000 UTC m=+1.511512494,LastTimestamp:2026-01-22 14:15:21.289428125 +0000 UTC m=+1.511512494,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.860519 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d132f141e1733 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.548379955 +0000 UTC m=+1.770464314,LastTimestamp:2026-01-22 14:15:21.548379955 +0000 UTC m=+1.770464314,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.865593 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d132f14292384 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.549104004 +0000 UTC m=+1.771188403,LastTimestamp:2026-01-22 14:15:21.549104004 +0000 UTC m=+1.771188403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.871384 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d132f1430e30b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.549611787 +0000 UTC m=+1.771696156,LastTimestamp:2026-01-22 14:15:21.549611787 +0000 UTC m=+1.771696156,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.875992 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d132f14cebf49 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.559957321 +0000 UTC m=+1.782041690,LastTimestamp:2026-01-22 14:15:21.559957321 +0000 UTC m=+1.782041690,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.881561 5110 event.go:359] "Server rejected event (will not retry!)" err="events 
is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d132f14e0cf65 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.561141093 +0000 UTC m=+1.783225462,LastTimestamp:2026-01-22 14:15:21.561141093 +0000 UTC m=+1.783225462,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.886365 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d132f14ecbd07 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.561922823 +0000 UTC m=+1.784007182,LastTimestamp:2026-01-22 14:15:21.561922823 +0000 UTC 
m=+1.784007182,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.893567 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d132f150a5d9b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.563864475 +0000 UTC m=+1.785948874,LastTimestamp:2026-01-22 14:15:21.563864475 +0000 UTC m=+1.785948874,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.898007 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d132f15263cdf openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.565691103 +0000 UTC m=+1.787775472,LastTimestamp:2026-01-22 14:15:21.565691103 +0000 UTC m=+1.787775472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.903955 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d132f23f395f4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.814029812 +0000 UTC m=+2.036114171,LastTimestamp:2026-01-22 14:15:21.814029812 +0000 UTC m=+2.036114171,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.917232 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d132f24a39977 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.825565047 +0000 UTC m=+2.047649406,LastTimestamp:2026-01-22 14:15:21.825565047 +0000 UTC m=+2.047649406,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.934080 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d132f24b6c1ef openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:21.826820591 +0000 UTC m=+2.048904950,LastTimestamp:2026-01-22 14:15:21.826820591 +0000 UTC m=+2.048904950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc 
kubenswrapper[5110]: E0122 14:15:39.936636 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d132f30f0415a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:22.031915354 +0000 UTC m=+2.253999713,LastTimestamp:2026-01-22 14:15:22.031915354 +0000 UTC m=+2.253999713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.939902 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d132f3104f6b4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:22.0332725 +0000 UTC m=+2.255356859,LastTimestamp:2026-01-22 14:15:22.0332725 +0000 UTC 
m=+2.255356859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.942802 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d132f3188030d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:22.041860877 +0000 UTC m=+2.263945236,LastTimestamp:2026-01-22 14:15:22.041860877 +0000 UTC m=+2.263945236,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.948607 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d132f31c3670c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container 
kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:22.0457531 +0000 UTC m=+2.267837459,LastTimestamp:2026-01-22 14:15:22.0457531 +0000 UTC m=+2.267837459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.954003 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d132f31d1d913 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:22.046699795 +0000 UTC m=+2.268784154,LastTimestamp:2026-01-22 14:15:22.046699795 +0000 UTC m=+2.268784154,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.958511 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d132f3e71ef43 openshift-kube-controller-manager 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:22.258517827 +0000 UTC m=+2.480602186,LastTimestamp:2026-01-22 14:15:22.258517827 +0000 UTC m=+2.480602186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.963119 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d132f3ef4de3c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:22.267098684 +0000 UTC m=+2.489183043,LastTimestamp:2026-01-22 14:15:22.267098684 +0000 UTC m=+2.489183043,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.969971 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f408c2833 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:22.293790771 +0000 UTC m=+2.515875130,LastTimestamp:2026-01-22 14:15:22.293790771 +0000 UTC m=+2.515875130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.974981 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132f40a5239a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:22.295427994 +0000 UTC m=+2.517512353,LastTimestamp:2026-01-22 14:15:22.295427994 +0000 UTC m=+2.517512353,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.980641 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132f4e3cf58e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:22.523481486 +0000 UTC m=+2.745565845,LastTimestamp:2026-01-22 14:15:22.523481486 +0000 UTC m=+2.745565845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.985910 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f4e442cac openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:22.523954348 +0000 UTC m=+2.746038707,LastTimestamp:2026-01-22 14:15:22.523954348 +0000 UTC m=+2.746038707,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.989909 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f4f20702f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:22.538389551 +0000 UTC m=+2.760473910,LastTimestamp:2026-01-22 14:15:22.538389551 +0000 UTC m=+2.760473910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:39 crc kubenswrapper[5110]: E0122 14:15:39.993863 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f4f328cde openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:22.539576542 +0000 UTC m=+2.761660901,LastTimestamp:2026-01-22 14:15:22.539576542 +0000 UTC m=+2.761660901,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.000321 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132f4f87c1fc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:22.5451607 +0000 UTC m=+2.767245059,LastTimestamp:2026-01-22 14:15:22.5451607 +0000 UTC m=+2.767245059,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.004466 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f5c36bcc5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: 
kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:22.757954757 +0000 UTC m=+2.980039116,LastTimestamp:2026-01-22 14:15:22.757954757 +0000 UTC m=+2.980039116,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.008379 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f5ce973b5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:22.769666997 +0000 UTC m=+2.991751356,LastTimestamp:2026-01-22 14:15:22.769666997 +0000 UTC m=+2.991751356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.012300 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f5cfc8822 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:22.77091741 +0000 UTC m=+2.993001769,LastTimestamp:2026-01-22 14:15:22.77091741 +0000 UTC m=+2.993001769,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.018708 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f6a3b7e7a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:22.993147514 +0000 UTC m=+3.215231873,LastTimestamp:2026-01-22 14:15:22.993147514 +0000 UTC m=+3.215231873,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.024447 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f6acf91c1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:23.002851777 +0000 UTC m=+3.224936136,LastTimestamp:2026-01-22 14:15:23.002851777 +0000 UTC m=+3.224936136,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.029465 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f6adf43e6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:23.003880422 +0000 UTC m=+3.225964781,LastTimestamp:2026-01-22 14:15:23.003880422 +0000 UTC m=+3.225964781,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.035478 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f765e25eb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:23.196745195 +0000 UTC m=+3.418829574,LastTimestamp:2026-01-22 14:15:23.196745195 +0000 UTC m=+3.418829574,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.040175 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f771c7612 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:23.209217554 +0000 UTC 
m=+3.431301913,LastTimestamp:2026-01-22 14:15:23.209217554 +0000 UTC m=+3.431301913,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.045289 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f772b3b7d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:23.210185597 +0000 UTC m=+3.432269956,LastTimestamp:2026-01-22 14:15:23.210185597 +0000 UTC m=+3.432269956,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.054455 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132f7e673847 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:23.331557447 +0000 UTC m=+3.553641806,LastTimestamp:2026-01-22 14:15:23.331557447 +0000 UTC m=+3.553641806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.060576 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f83c6912f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:23.421692207 +0000 UTC m=+3.643776566,LastTimestamp:2026-01-22 14:15:23.421692207 +0000 UTC m=+3.643776566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.066759 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f84f7ee13 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:23.441704467 +0000 UTC m=+3.663788846,LastTimestamp:2026-01-22 14:15:23.441704467 +0000 UTC m=+3.663788846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.071161 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132f8c660e53 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:23.566362195 +0000 UTC m=+3.788446554,LastTimestamp:2026-01-22 14:15:23.566362195 +0000 UTC m=+3.788446554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.075426 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132f90c3984f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:23.639601231 +0000 UTC m=+3.861685590,LastTimestamp:2026-01-22 14:15:23.639601231 +0000 UTC m=+3.861685590,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.081362 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132fbb3c089a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:24.35213737 +0000 UTC m=+4.574221769,LastTimestamp:2026-01-22 14:15:24.35213737 +0000 UTC m=+4.574221769,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.087540 5110 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132fc891c735 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:24.575860533 +0000 UTC m=+4.797944892,LastTimestamp:2026-01-22 14:15:24.575860533 +0000 UTC m=+4.797944892,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.092392 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132fc9817083 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:24.591566979 +0000 UTC m=+4.813651378,LastTimestamp:2026-01-22 14:15:24.591566979 +0000 UTC m=+4.813651378,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.097184 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132fc99d98bb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:24.593412283 +0000 UTC m=+4.815496642,LastTimestamp:2026-01-22 14:15:24.593412283 +0000 UTC m=+4.815496642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.106360 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132fd60c7243 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:24.802003523 +0000 UTC m=+5.024087882,LastTimestamp:2026-01-22 14:15:24.802003523 +0000 UTC m=+5.024087882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.110340 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132fd6f70564 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:24.817376612 +0000 UTC m=+5.039460971,LastTimestamp:2026-01-22 14:15:24.817376612 +0000 UTC m=+5.039460971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.115194 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132fd70c29b5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:24.818762165 +0000 UTC m=+5.040846524,LastTimestamp:2026-01-22 14:15:24.818762165 +0000 UTC m=+5.040846524,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.120281 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132fe17820d0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:24.993609936 +0000 UTC m=+5.215694295,LastTimestamp:2026-01-22 14:15:24.993609936 +0000 UTC m=+5.215694295,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.125377 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132fe2810ef3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:25.010972403 +0000 UTC m=+5.233056762,LastTimestamp:2026-01-22 14:15:25.010972403 +0000 UTC m=+5.233056762,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.132760 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132fe2940430 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:25.012214832 +0000 UTC m=+5.234299191,LastTimestamp:2026-01-22 14:15:25.012214832 +0000 UTC m=+5.234299191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.142208 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132fee1cf81f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:25.205739551 +0000 UTC m=+5.427823910,LastTimestamp:2026-01-22 14:15:25.205739551 +0000 UTC m=+5.427823910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.144566 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132feecaabb6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:25.217123254 +0000 UTC m=+5.439207613,LastTimestamp:2026-01-22 14:15:25.217123254 +0000 UTC m=+5.439207613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.149578 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132feedbcd85 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:25.218246021 +0000 UTC m=+5.440330380,LastTimestamp:2026-01-22 14:15:25.218246021 +0000 UTC m=+5.440330380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.156081 5110 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132ff93ab7b0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:25.392238512 +0000 UTC m=+5.614322871,LastTimestamp:2026-01-22 14:15:25.392238512 +0000 UTC m=+5.614322871,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: I0122 14:15:40.172793 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.172788 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d132ff9fe1aef openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:25.405043439 +0000 UTC m=+5.627127798,LastTimestamp:2026-01-22 14:15:25.405043439 +0000 UTC m=+5.627127798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.178544 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 22 14:15:40 crc kubenswrapper[5110]: &Event{ObjectMeta:{kube-controller-manager-crc.188d13302745e23f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Jan 22 14:15:40 crc kubenswrapper[5110]: body: Jan 22 14:15:40 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:26.164722239 +0000 UTC m=+6.386806598,LastTimestamp:2026-01-22 14:15:26.164722239 +0000 UTC m=+6.386806598,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 14:15:40 crc kubenswrapper[5110]: > Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.184293 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d13302747652c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:26.164821292 +0000 UTC m=+6.386905651,LastTimestamp:2026-01-22 14:15:26.164821292 +0000 UTC m=+6.386905651,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.190264 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 22 14:15:40 crc kubenswrapper[5110]: &Event{ObjectMeta:{kube-apiserver-crc.188d1332191461d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 22 14:15:40 crc kubenswrapper[5110]: body: Jan 22 14:15:40 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:34.516531669 +0000 UTC m=+14.738616018,LastTimestamp:2026-01-22 14:15:34.516531669 +0000 UTC m=+14.738616018,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 14:15:40 crc 
kubenswrapper[5110]: > Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.194428 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d13321915a0a1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:34.516613281 +0000 UTC m=+14.738697640,LastTimestamp:2026-01-22 14:15:34.516613281 +0000 UTC m=+14.738697640,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.198979 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 22 14:15:40 crc kubenswrapper[5110]: &Event{ObjectMeta:{kube-apiserver-crc.188d13321f467207 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 22 14:15:40 crc kubenswrapper[5110]: body: 
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 14:15:40 crc kubenswrapper[5110]: Jan 22 14:15:40 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:34.620475911 +0000 UTC m=+14.842560270,LastTimestamp:2026-01-22 14:15:34.620475911 +0000 UTC m=+14.842560270,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 14:15:40 crc kubenswrapper[5110]: > Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.203010 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d13321f478301 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:34.620545793 +0000 UTC m=+14.842630152,LastTimestamp:2026-01-22 14:15:34.620545793 +0000 UTC m=+14.842630152,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.207456 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.188d13302745e23f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event=< Jan 22 14:15:40 crc kubenswrapper[5110]: &Event{ObjectMeta:{kube-controller-manager-crc.188d13302745e23f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Jan 22 14:15:40 crc kubenswrapper[5110]: body: Jan 22 14:15:40 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:26.164722239 +0000 UTC m=+6.386806598,LastTimestamp:2026-01-22 14:15:36.166422097 +0000 UTC m=+16.388506456,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 14:15:40 crc kubenswrapper[5110]: > Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.214420 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.188d13302747652c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d13302747652c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:26.164821292 +0000 
UTC m=+6.386905651,LastTimestamp:2026-01-22 14:15:36.166469769 +0000 UTC m=+16.388554128,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.219673 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 22 14:15:40 crc kubenswrapper[5110]: &Event{ObjectMeta:{kube-apiserver-crc.188d13334a975f3f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:35484->192.168.126.11:17697: read: connection reset by peer Jan 22 14:15:40 crc kubenswrapper[5110]: body: Jan 22 14:15:40 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:39.642167103 +0000 UTC m=+19.864251482,LastTimestamp:2026-01-22 14:15:39.642167103 +0000 UTC m=+19.864251482,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 14:15:40 crc kubenswrapper[5110]: > Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.227405 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d13334a985743 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35484->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:39.642230595 +0000 UTC m=+19.864314974,LastTimestamp:2026-01-22 14:15:39.642230595 +0000 UTC m=+19.864314974,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.235892 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 22 14:15:40 crc kubenswrapper[5110]: &Event{ObjectMeta:{kube-apiserver-crc.188d13334aa26653 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 22 14:15:40 crc kubenswrapper[5110]: body: Jan 22 14:15:40 crc kubenswrapper[5110]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:39.642889811 +0000 UTC m=+19.864974210,LastTimestamp:2026-01-22 14:15:39.642889811 +0000 UTC m=+19.864974210,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 22 14:15:40 crc kubenswrapper[5110]: > Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.241055 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d13334aa37a8a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:39.642960522 +0000 UTC m=+19.865044931,LastTimestamp:2026-01-22 14:15:39.642960522 +0000 UTC m=+19.865044931,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: I0122 14:15:40.303466 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:15:40 crc kubenswrapper[5110]: I0122 14:15:40.303666 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:40 crc kubenswrapper[5110]: I0122 14:15:40.305460 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:40 crc kubenswrapper[5110]: I0122 14:15:40.305489 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:40 crc 
kubenswrapper[5110]: I0122 14:15:40.305498 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.305751 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.327020 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.401168 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 14:15:40 crc kubenswrapper[5110]: I0122 14:15:40.411807 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 22 14:15:40 crc kubenswrapper[5110]: I0122 14:15:40.413513 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d80b5d8b8dbf89a5c7baed724d422f54f1f119f44be0e204f1a8237a5022ffbd" exitCode=255 Jan 22 14:15:40 crc kubenswrapper[5110]: I0122 14:15:40.413634 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"d80b5d8b8dbf89a5c7baed724d422f54f1f119f44be0e204f1a8237a5022ffbd"} Jan 22 14:15:40 crc kubenswrapper[5110]: I0122 14:15:40.413822 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:40 crc kubenswrapper[5110]: I0122 14:15:40.414643 5110 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:40 crc kubenswrapper[5110]: I0122 14:15:40.414675 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:40 crc kubenswrapper[5110]: I0122 14:15:40.414686 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.415055 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:40 crc kubenswrapper[5110]: I0122 14:15:40.415318 5110 scope.go:117] "RemoveContainer" containerID="d80b5d8b8dbf89a5c7baed724d422f54f1f119f44be0e204f1a8237a5022ffbd" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.422972 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d132f772b3b7d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f772b3b7d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:23.210185597 +0000 UTC m=+3.432269956,LastTimestamp:2026-01-22 14:15:40.416789699 +0000 UTC m=+20.638874058,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.628730 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d132f83c6912f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f83c6912f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:23.421692207 +0000 UTC m=+3.643776566,LastTimestamp:2026-01-22 14:15:40.621161326 +0000 UTC m=+20.843245685,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:40 crc kubenswrapper[5110]: E0122 14:15:40.635319 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d132f84f7ee13\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f84f7ee13 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container 
kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:23.441704467 +0000 UTC m=+3.663788846,LastTimestamp:2026-01-22 14:15:40.631207661 +0000 UTC m=+20.853292020,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:41 crc kubenswrapper[5110]: I0122 14:15:41.169882 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:41 crc kubenswrapper[5110]: I0122 14:15:41.420456 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 22 14:15:41 crc kubenswrapper[5110]: I0122 14:15:41.422540 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"41a86749903e9d9dc315af20522205b8203fd310d1783b36c9487da50459936d"} Jan 22 14:15:41 crc kubenswrapper[5110]: I0122 14:15:41.422759 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:41 crc kubenswrapper[5110]: I0122 14:15:41.423395 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:41 crc kubenswrapper[5110]: I0122 14:15:41.423437 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:41 crc kubenswrapper[5110]: I0122 14:15:41.423448 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:41 crc kubenswrapper[5110]: E0122 14:15:41.423809 5110 kubelet.go:3336] "No need 
to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:42 crc kubenswrapper[5110]: I0122 14:15:42.169390 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:42 crc kubenswrapper[5110]: I0122 14:15:42.425975 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 22 14:15:42 crc kubenswrapper[5110]: I0122 14:15:42.426824 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 22 14:15:42 crc kubenswrapper[5110]: I0122 14:15:42.428648 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="41a86749903e9d9dc315af20522205b8203fd310d1783b36c9487da50459936d" exitCode=255 Jan 22 14:15:42 crc kubenswrapper[5110]: I0122 14:15:42.428704 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"41a86749903e9d9dc315af20522205b8203fd310d1783b36c9487da50459936d"} Jan 22 14:15:42 crc kubenswrapper[5110]: I0122 14:15:42.428737 5110 scope.go:117] "RemoveContainer" containerID="d80b5d8b8dbf89a5c7baed724d422f54f1f119f44be0e204f1a8237a5022ffbd" Jan 22 14:15:42 crc kubenswrapper[5110]: I0122 14:15:42.429001 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:42 crc kubenswrapper[5110]: I0122 14:15:42.429702 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:42 
crc kubenswrapper[5110]: I0122 14:15:42.429800 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:42 crc kubenswrapper[5110]: I0122 14:15:42.430315 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:42 crc kubenswrapper[5110]: E0122 14:15:42.432291 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:42 crc kubenswrapper[5110]: I0122 14:15:42.432952 5110 scope.go:117] "RemoveContainer" containerID="41a86749903e9d9dc315af20522205b8203fd310d1783b36c9487da50459936d" Jan 22 14:15:42 crc kubenswrapper[5110]: E0122 14:15:42.433449 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:15:42 crc kubenswrapper[5110]: E0122 14:15:42.442151 5110 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d1333f0f5acb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod 
kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:42.433365171 +0000 UTC m=+22.655449540,LastTimestamp:2026-01-22 14:15:42.433365171 +0000 UTC m=+22.655449540,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:42 crc kubenswrapper[5110]: E0122 14:15:42.802336 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 14:15:42 crc kubenswrapper[5110]: I0122 14:15:42.984829 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.050139 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.051139 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.051219 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.051244 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.051290 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:15:43 crc kubenswrapper[5110]: E0122 14:15:43.061672 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource 
\"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.169571 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.171376 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.171582 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.172653 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.172686 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.172699 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:43 crc kubenswrapper[5110]: E0122 14:15:43.173040 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.177067 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.433887 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.436188 5110 kubelet_node_status.go:413] 
"Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.436304 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.436933 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.436984 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.437001 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.437039 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.437099 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.437126 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:43 crc kubenswrapper[5110]: E0122 14:15:43.437503 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:43 crc kubenswrapper[5110]: I0122 14:15:43.437814 5110 scope.go:117] "RemoveContainer" containerID="41a86749903e9d9dc315af20522205b8203fd310d1783b36c9487da50459936d" Jan 22 14:15:43 crc kubenswrapper[5110]: E0122 14:15:43.437842 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:43 crc kubenswrapper[5110]: E0122 14:15:43.438074 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:15:43 crc kubenswrapper[5110]: E0122 14:15:43.445297 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d1333f0f5acb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d1333f0f5acb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:42.433365171 +0000 UTC m=+22.655449540,LastTimestamp:2026-01-22 14:15:43.438038143 +0000 UTC m=+23.660122512,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:44 crc kubenswrapper[5110]: I0122 14:15:44.169319 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:44 crc kubenswrapper[5110]: I0122 14:15:44.437969 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 
14:15:44 crc kubenswrapper[5110]: I0122 14:15:44.438535 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:44 crc kubenswrapper[5110]: I0122 14:15:44.438570 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:44 crc kubenswrapper[5110]: I0122 14:15:44.438581 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:44 crc kubenswrapper[5110]: E0122 14:15:44.438971 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:44 crc kubenswrapper[5110]: I0122 14:15:44.439195 5110 scope.go:117] "RemoveContainer" containerID="41a86749903e9d9dc315af20522205b8203fd310d1783b36c9487da50459936d" Jan 22 14:15:44 crc kubenswrapper[5110]: E0122 14:15:44.439364 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:15:44 crc kubenswrapper[5110]: E0122 14:15:44.443471 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d1333f0f5acb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d1333f0f5acb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:42.433365171 +0000 UTC m=+22.655449540,LastTimestamp:2026-01-22 14:15:44.439333375 +0000 UTC m=+24.661417734,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:45 crc kubenswrapper[5110]: I0122 14:15:45.170944 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:45 crc kubenswrapper[5110]: E0122 14:15:45.933779 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 14:15:46 crc kubenswrapper[5110]: E0122 14:15:46.064542 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 14:15:46 crc kubenswrapper[5110]: I0122 14:15:46.171471 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:46 crc kubenswrapper[5110]: E0122 14:15:46.418109 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 14:15:47 crc kubenswrapper[5110]: I0122 14:15:47.169233 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:48 crc kubenswrapper[5110]: I0122 14:15:48.170221 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:49 crc kubenswrapper[5110]: I0122 14:15:49.171854 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:49 crc kubenswrapper[5110]: E0122 14:15:49.813097 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 14:15:50 crc kubenswrapper[5110]: I0122 14:15:50.062797 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:50 crc kubenswrapper[5110]: I0122 14:15:50.063933 5110 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:50 crc kubenswrapper[5110]: I0122 14:15:50.063976 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:50 crc kubenswrapper[5110]: I0122 14:15:50.063988 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:50 crc kubenswrapper[5110]: I0122 14:15:50.064013 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:15:50 crc kubenswrapper[5110]: E0122 14:15:50.078114 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 14:15:50 crc kubenswrapper[5110]: I0122 14:15:50.165779 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:50 crc kubenswrapper[5110]: E0122 14:15:50.327235 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 14:15:51 crc kubenswrapper[5110]: I0122 14:15:51.171689 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:51 crc kubenswrapper[5110]: I0122 14:15:51.423569 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:15:51 crc kubenswrapper[5110]: I0122 14:15:51.423972 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 
14:15:51 crc kubenswrapper[5110]: I0122 14:15:51.424919 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:51 crc kubenswrapper[5110]: I0122 14:15:51.424953 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:51 crc kubenswrapper[5110]: I0122 14:15:51.424964 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:51 crc kubenswrapper[5110]: E0122 14:15:51.425326 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:15:51 crc kubenswrapper[5110]: I0122 14:15:51.425602 5110 scope.go:117] "RemoveContainer" containerID="41a86749903e9d9dc315af20522205b8203fd310d1783b36c9487da50459936d" Jan 22 14:15:51 crc kubenswrapper[5110]: E0122 14:15:51.425861 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:15:51 crc kubenswrapper[5110]: E0122 14:15:51.432366 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d1333f0f5acb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d1333f0f5acb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:42.433365171 +0000 UTC m=+22.655449540,LastTimestamp:2026-01-22 14:15:51.425823644 +0000 UTC m=+31.647908013,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:15:52 crc kubenswrapper[5110]: I0122 14:15:52.170195 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:52 crc kubenswrapper[5110]: E0122 14:15:52.928645 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 14:15:53 crc kubenswrapper[5110]: I0122 14:15:53.170109 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:54 crc kubenswrapper[5110]: I0122 14:15:54.170817 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:55 crc kubenswrapper[5110]: I0122 14:15:55.172063 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:56 crc kubenswrapper[5110]: I0122 14:15:56.171831 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:56 crc kubenswrapper[5110]: E0122 14:15:56.508477 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 14:15:56 crc kubenswrapper[5110]: E0122 14:15:56.819711 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 14:15:57 crc kubenswrapper[5110]: I0122 14:15:57.079124 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:15:57 crc kubenswrapper[5110]: I0122 14:15:57.080504 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:15:57 crc kubenswrapper[5110]: I0122 14:15:57.080579 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:15:57 crc kubenswrapper[5110]: I0122 14:15:57.080609 5110 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:15:57 crc kubenswrapper[5110]: I0122 14:15:57.080697 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:15:57 crc kubenswrapper[5110]: E0122 14:15:57.090712 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 14:15:57 crc kubenswrapper[5110]: I0122 14:15:57.172516 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:58 crc kubenswrapper[5110]: I0122 14:15:58.170176 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:15:59 crc kubenswrapper[5110]: I0122 14:15:59.172258 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:16:00 crc kubenswrapper[5110]: I0122 14:16:00.172123 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:16:00 crc kubenswrapper[5110]: E0122 14:16:00.327881 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 14:16:01 crc kubenswrapper[5110]: I0122 14:16:01.170154 5110 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:16:02 crc kubenswrapper[5110]: I0122 14:16:02.169242 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:16:03 crc kubenswrapper[5110]: I0122 14:16:03.175726 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:16:03 crc kubenswrapper[5110]: I0122 14:16:03.272656 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:16:03 crc kubenswrapper[5110]: I0122 14:16:03.273796 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:03 crc kubenswrapper[5110]: I0122 14:16:03.274004 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:03 crc kubenswrapper[5110]: I0122 14:16:03.274082 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:03 crc kubenswrapper[5110]: E0122 14:16:03.274713 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:16:03 crc kubenswrapper[5110]: I0122 14:16:03.275317 5110 scope.go:117] "RemoveContainer" containerID="41a86749903e9d9dc315af20522205b8203fd310d1783b36c9487da50459936d" Jan 22 14:16:03 crc kubenswrapper[5110]: E0122 14:16:03.283155 5110 event.go:359] "Server 
rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d132f772b3b7d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f772b3b7d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:23.210185597 +0000 UTC m=+3.432269956,LastTimestamp:2026-01-22 14:16:03.276999398 +0000 UTC m=+43.499083827,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:16:03 crc kubenswrapper[5110]: E0122 14:16:03.585416 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d132f83c6912f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f83c6912f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:23.421692207 +0000 UTC 
m=+3.643776566,LastTimestamp:2026-01-22 14:16:03.576877357 +0000 UTC m=+43.798962276,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:16:03 crc kubenswrapper[5110]: E0122 14:16:03.598750 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d132f84f7ee13\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d132f84f7ee13 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:23.441704467 +0000 UTC m=+3.663788846,LastTimestamp:2026-01-22 14:16:03.591186055 +0000 UTC m=+43.813270414,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:16:03 crc kubenswrapper[5110]: E0122 14:16:03.827484 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 14:16:04 crc kubenswrapper[5110]: I0122 14:16:04.091489 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:16:04 crc kubenswrapper[5110]: I0122 14:16:04.092557 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:04 
crc kubenswrapper[5110]: I0122 14:16:04.092606 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:04 crc kubenswrapper[5110]: I0122 14:16:04.092656 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:04 crc kubenswrapper[5110]: I0122 14:16:04.092693 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:16:04 crc kubenswrapper[5110]: E0122 14:16:04.105274 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 14:16:04 crc kubenswrapper[5110]: I0122 14:16:04.170937 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:16:04 crc kubenswrapper[5110]: I0122 14:16:04.493277 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 22 14:16:04 crc kubenswrapper[5110]: I0122 14:16:04.495211 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"522df8be05b997917cef2629dd8b6a2181a83f54e5d57a79f75a56dfaa8c19a8"} Jan 22 14:16:04 crc kubenswrapper[5110]: I0122 14:16:04.495433 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:16:04 crc kubenswrapper[5110]: I0122 14:16:04.496006 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:04 crc 
kubenswrapper[5110]: I0122 14:16:04.496042 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:04 crc kubenswrapper[5110]: I0122 14:16:04.496057 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:04 crc kubenswrapper[5110]: E0122 14:16:04.496392 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:16:05 crc kubenswrapper[5110]: I0122 14:16:05.170295 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:16:05 crc kubenswrapper[5110]: E0122 14:16:05.370780 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 14:16:05 crc kubenswrapper[5110]: I0122 14:16:05.499957 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 22 14:16:05 crc kubenswrapper[5110]: I0122 14:16:05.500419 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 22 14:16:05 crc kubenswrapper[5110]: I0122 14:16:05.502250 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="522df8be05b997917cef2629dd8b6a2181a83f54e5d57a79f75a56dfaa8c19a8" exitCode=255 Jan 22 14:16:05 crc kubenswrapper[5110]: I0122 14:16:05.502306 5110 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"522df8be05b997917cef2629dd8b6a2181a83f54e5d57a79f75a56dfaa8c19a8"} Jan 22 14:16:05 crc kubenswrapper[5110]: I0122 14:16:05.502350 5110 scope.go:117] "RemoveContainer" containerID="41a86749903e9d9dc315af20522205b8203fd310d1783b36c9487da50459936d" Jan 22 14:16:05 crc kubenswrapper[5110]: I0122 14:16:05.502575 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:16:05 crc kubenswrapper[5110]: I0122 14:16:05.503268 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:05 crc kubenswrapper[5110]: I0122 14:16:05.503323 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:05 crc kubenswrapper[5110]: I0122 14:16:05.503339 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:05 crc kubenswrapper[5110]: E0122 14:16:05.503845 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:16:05 crc kubenswrapper[5110]: I0122 14:16:05.504161 5110 scope.go:117] "RemoveContainer" containerID="522df8be05b997917cef2629dd8b6a2181a83f54e5d57a79f75a56dfaa8c19a8" Jan 22 14:16:05 crc kubenswrapper[5110]: E0122 14:16:05.504394 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:16:05 crc kubenswrapper[5110]: 
E0122 14:16:05.509425 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d1333f0f5acb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d1333f0f5acb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:42.433365171 +0000 UTC m=+22.655449540,LastTimestamp:2026-01-22 14:16:05.504356197 +0000 UTC m=+45.726440546,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:16:06 crc kubenswrapper[5110]: I0122 14:16:06.173053 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:16:06 crc kubenswrapper[5110]: I0122 14:16:06.507239 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 22 14:16:07 crc kubenswrapper[5110]: I0122 14:16:07.173082 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope 
Jan 22 14:16:08 crc kubenswrapper[5110]: I0122 14:16:08.172055 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:16:09 crc kubenswrapper[5110]: I0122 14:16:09.172741 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:16:10 crc kubenswrapper[5110]: I0122 14:16:10.171273 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:16:10 crc kubenswrapper[5110]: E0122 14:16:10.329247 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 14:16:10 crc kubenswrapper[5110]: E0122 14:16:10.433877 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 14:16:10 crc kubenswrapper[5110]: E0122 14:16:10.835427 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 14:16:11 crc kubenswrapper[5110]: I0122 14:16:11.105378 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:16:11 crc kubenswrapper[5110]: I0122 
14:16:11.106971 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:11 crc kubenswrapper[5110]: I0122 14:16:11.107005 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:11 crc kubenswrapper[5110]: I0122 14:16:11.107015 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:11 crc kubenswrapper[5110]: I0122 14:16:11.107035 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:16:11 crc kubenswrapper[5110]: E0122 14:16:11.118202 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 14:16:11 crc kubenswrapper[5110]: I0122 14:16:11.170010 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:16:11 crc kubenswrapper[5110]: E0122 14:16:11.699165 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 14:16:12 crc kubenswrapper[5110]: I0122 14:16:12.170359 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:16:12 crc kubenswrapper[5110]: I0122 14:16:12.984960 5110 
kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:16:12 crc kubenswrapper[5110]: I0122 14:16:12.985379 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:16:12 crc kubenswrapper[5110]: I0122 14:16:12.986793 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:12 crc kubenswrapper[5110]: I0122 14:16:12.986857 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:12 crc kubenswrapper[5110]: I0122 14:16:12.986883 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:12 crc kubenswrapper[5110]: E0122 14:16:12.987546 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:16:12 crc kubenswrapper[5110]: I0122 14:16:12.988075 5110 scope.go:117] "RemoveContainer" containerID="522df8be05b997917cef2629dd8b6a2181a83f54e5d57a79f75a56dfaa8c19a8" Jan 22 14:16:12 crc kubenswrapper[5110]: E0122 14:16:12.988506 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:16:12 crc kubenswrapper[5110]: E0122 14:16:12.997942 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d1333f0f5acb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.188d1333f0f5acb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:42.433365171 +0000 UTC m=+22.655449540,LastTimestamp:2026-01-22 14:16:12.988433615 +0000 UTC m=+53.210518014,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:16:13 crc kubenswrapper[5110]: I0122 14:16:13.172231 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:16:13 crc kubenswrapper[5110]: I0122 14:16:13.335497 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:16:13 crc kubenswrapper[5110]: I0122 14:16:13.335713 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:16:13 crc kubenswrapper[5110]: I0122 14:16:13.336496 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:13 crc kubenswrapper[5110]: I0122 14:16:13.336531 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:13 crc kubenswrapper[5110]: I0122 14:16:13.336541 5110 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:13 crc kubenswrapper[5110]: E0122 14:16:13.336820 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:16:14 crc kubenswrapper[5110]: I0122 14:16:14.170210 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:16:14 crc kubenswrapper[5110]: I0122 14:16:14.496467 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:16:14 crc kubenswrapper[5110]: I0122 14:16:14.496727 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:16:14 crc kubenswrapper[5110]: I0122 14:16:14.497703 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:14 crc kubenswrapper[5110]: I0122 14:16:14.497907 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:14 crc kubenswrapper[5110]: I0122 14:16:14.498038 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:14 crc kubenswrapper[5110]: E0122 14:16:14.498726 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:16:14 crc kubenswrapper[5110]: I0122 14:16:14.499232 5110 scope.go:117] "RemoveContainer" containerID="522df8be05b997917cef2629dd8b6a2181a83f54e5d57a79f75a56dfaa8c19a8" Jan 22 14:16:14 crc kubenswrapper[5110]: E0122 14:16:14.499703 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:16:14 crc kubenswrapper[5110]: E0122 14:16:14.505431 5110 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d1333f0f5acb3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d1333f0f5acb3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:15:42.433365171 +0000 UTC m=+22.655449540,LastTimestamp:2026-01-22 14:16:14.499595652 +0000 UTC m=+54.721680041,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:16:15 crc kubenswrapper[5110]: I0122 14:16:15.172818 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:16:16 crc kubenswrapper[5110]: I0122 14:16:16.170012 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is 
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:16:17 crc kubenswrapper[5110]: I0122 14:16:17.172921 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 14:16:17 crc kubenswrapper[5110]: E0122 14:16:17.842177 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 14:16:18 crc kubenswrapper[5110]: I0122 14:16:18.148794 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:16:18 crc kubenswrapper[5110]: I0122 14:16:18.149778 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:18 crc kubenswrapper[5110]: I0122 14:16:18.149822 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:18 crc kubenswrapper[5110]: I0122 14:16:18.149834 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:18 crc kubenswrapper[5110]: I0122 14:16:18.149859 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 14:16:18 crc kubenswrapper[5110]: E0122 14:16:18.159714 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 14:16:18 crc kubenswrapper[5110]: I0122 14:16:18.169929 5110 csi_plugin.go:988] Failed to contact API server when 
waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 14:16:19 crc kubenswrapper[5110]: I0122 14:16:19.172505 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 14:16:19 crc kubenswrapper[5110]: E0122 14:16:19.328169 5110 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 22 14:16:20 crc kubenswrapper[5110]: I0122 14:16:20.173391 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 14:16:20 crc kubenswrapper[5110]: E0122 14:16:20.330525 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 22 14:16:21 crc kubenswrapper[5110]: I0122 14:16:21.169365 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 14:16:22 crc kubenswrapper[5110]: I0122 14:16:22.173240 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 14:16:23 crc kubenswrapper[5110]: I0122 14:16:23.172043 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 14:16:24 crc kubenswrapper[5110]: I0122 14:16:24.171752 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 14:16:24 crc kubenswrapper[5110]: E0122 14:16:24.847178 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 22 14:16:25 crc kubenswrapper[5110]: I0122 14:16:25.160212 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:16:25 crc kubenswrapper[5110]: I0122 14:16:25.161429 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:16:25 crc kubenswrapper[5110]: I0122 14:16:25.161483 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:16:25 crc kubenswrapper[5110]: I0122 14:16:25.161503 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:16:25 crc kubenswrapper[5110]: I0122 14:16:25.161534 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 14:16:25 crc kubenswrapper[5110]: I0122 14:16:25.166984 5110 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 14:16:25 crc kubenswrapper[5110]: E0122 14:16:25.169293 5110 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 22 14:16:25 crc kubenswrapper[5110]: I0122 14:16:25.425280 5110 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-gn2dq"
Jan 22 14:16:25 crc kubenswrapper[5110]: I0122 14:16:25.432929 5110 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-gn2dq"
Jan 22 14:16:25 crc kubenswrapper[5110]: I0122 14:16:25.533041 5110 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 22 14:16:26 crc kubenswrapper[5110]: I0122 14:16:26.061374 5110 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 22 14:16:26 crc kubenswrapper[5110]: I0122 14:16:26.272769 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:16:26 crc kubenswrapper[5110]: I0122 14:16:26.272866 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:16:26 crc kubenswrapper[5110]: I0122 14:16:26.273845 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:16:26 crc kubenswrapper[5110]: I0122 14:16:26.273962 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:16:26 crc kubenswrapper[5110]: I0122 14:16:26.274057 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:16:26 crc kubenswrapper[5110]: I0122 14:16:26.274420 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:16:26 crc kubenswrapper[5110]: I0122 14:16:26.274485 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:16:26 crc kubenswrapper[5110]: I0122 14:16:26.274497 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:16:26 crc kubenswrapper[5110]: E0122 14:16:26.275161 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:16:26 crc kubenswrapper[5110]: I0122 14:16:26.275529 5110 scope.go:117] "RemoveContainer" containerID="522df8be05b997917cef2629dd8b6a2181a83f54e5d57a79f75a56dfaa8c19a8"
Jan 22 14:16:26 crc kubenswrapper[5110]: E0122 14:16:26.276094 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:16:26 crc kubenswrapper[5110]: I0122 14:16:26.434601 5110 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-02-21 14:11:25 +0000 UTC" deadline="2026-02-18 07:32:56.397917029 +0000 UTC"
Jan 22 14:16:26 crc kubenswrapper[5110]: I0122 14:16:26.434656 5110 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="641h16m29.96326609s"
Jan 22 14:16:26 crc kubenswrapper[5110]: I0122 14:16:26.559376 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 22 14:16:26 crc kubenswrapper[5110]: I0122 14:16:26.560333 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d92e709029c46faaa9145ae147eb23298fd350281c9da48e17ca1a5fe7b5de07"}
Jan 22 14:16:26 crc kubenswrapper[5110]: I0122 14:16:26.560523 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:16:26 crc kubenswrapper[5110]: I0122 14:16:26.561131 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:16:26 crc kubenswrapper[5110]: I0122 14:16:26.561171 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:16:26 crc kubenswrapper[5110]: I0122 14:16:26.561181 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:16:26 crc kubenswrapper[5110]: E0122 14:16:26.561562 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:16:28 crc kubenswrapper[5110]: I0122 14:16:28.566564 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 22 14:16:28 crc kubenswrapper[5110]: I0122 14:16:28.567540 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Jan 22 14:16:28 crc kubenswrapper[5110]: I0122 14:16:28.568937 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d92e709029c46faaa9145ae147eb23298fd350281c9da48e17ca1a5fe7b5de07" exitCode=255
Jan 22 14:16:28 crc kubenswrapper[5110]: I0122 14:16:28.568994 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"d92e709029c46faaa9145ae147eb23298fd350281c9da48e17ca1a5fe7b5de07"}
Jan 22 14:16:28 crc kubenswrapper[5110]: I0122 14:16:28.569040 5110 scope.go:117] "RemoveContainer" containerID="522df8be05b997917cef2629dd8b6a2181a83f54e5d57a79f75a56dfaa8c19a8"
Jan 22 14:16:28 crc kubenswrapper[5110]: I0122 14:16:28.569370 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:16:28 crc kubenswrapper[5110]: I0122 14:16:28.570118 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:16:28 crc kubenswrapper[5110]: I0122 14:16:28.570168 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:16:28 crc kubenswrapper[5110]: I0122 14:16:28.570185 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:16:28 crc kubenswrapper[5110]: E0122 14:16:28.570790 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:16:28 crc kubenswrapper[5110]: I0122 14:16:28.571394 5110 scope.go:117] "RemoveContainer" containerID="d92e709029c46faaa9145ae147eb23298fd350281c9da48e17ca1a5fe7b5de07"
Jan 22 14:16:28 crc kubenswrapper[5110]: E0122 14:16:28.571916 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 22 14:16:29 crc kubenswrapper[5110]: I0122 14:16:29.574364 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Jan 22 14:16:30 crc kubenswrapper[5110]: E0122 14:16:30.331669 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.169894 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.171593 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.171661 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.171672 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.171788 5110 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.180011 5110 kubelet_node_status.go:127] "Node was previously registered" node="crc"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.180304 5110 kubelet_node_status.go:81] "Successfully registered node" node="crc"
Jan 22 14:16:32 crc kubenswrapper[5110]: E0122 14:16:32.180333 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.183579 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.183654 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:16:32 crc
kubenswrapper[5110]: I0122 14:16:32.183672 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.183694 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.183709 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:32Z","lastTransitionTime":"2026-01-22T14:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 14:16:32 crc kubenswrapper[5110]: E0122 14:16:32.198600 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:32Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1
919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c486
7005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\
\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb
3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"852a491e-9e7b-4f26-a7f5-3ca241db6d4a\\\",\\\"systemUUID\\\":\\\"c7f66b8f-fb8d-43bb-91c4-80fc1b273d77\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.206130 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.206170 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.206184 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.206200 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.206212 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:32Z","lastTransitionTime":"2026-01-22T14:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:16:32 crc kubenswrapper[5110]: E0122 14:16:32.215240 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"852a491e-9e7b-4f26-a7f5-3ca241db6d4a\\\",\\\"systemUUID\\\":\\\"c7f66b8f-fb8d-43bb-91c4-80fc1b273d77\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.222681 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.222716 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.222727 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.222741 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.222750 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:32Z","lastTransitionTime":"2026-01-22T14:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.236255 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.236423 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.236706 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.236793 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.236880 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:32Z","lastTransitionTime":"2026-01-22T14:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:32 crc kubenswrapper[5110]: E0122 14:16:32.246462 5110 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 22 14:16:32 crc kubenswrapper[5110]: E0122 14:16:32.246544 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:32 crc kubenswrapper[5110]: E0122 14:16:32.346777 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:32 crc kubenswrapper[5110]: E0122 14:16:32.447602 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:32 crc kubenswrapper[5110]: E0122 14:16:32.548103 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:32 crc kubenswrapper[5110]: E0122 14:16:32.648594 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:32 crc kubenswrapper[5110]: E0122 14:16:32.748986 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:32 crc kubenswrapper[5110]: E0122 14:16:32.849094 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:32 crc kubenswrapper[5110]: E0122 14:16:32.950206 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.984028 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 
14:16:32.984485 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.985992 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.986040 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.986062 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:16:32 crc kubenswrapper[5110]: E0122 14:16:32.986584 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:16:32 crc kubenswrapper[5110]: I0122 14:16:32.986866 5110 scope.go:117] "RemoveContainer" containerID="d92e709029c46faaa9145ae147eb23298fd350281c9da48e17ca1a5fe7b5de07"
Jan 22 14:16:32 crc kubenswrapper[5110]: E0122 14:16:32.987094 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 22 14:16:33 crc kubenswrapper[5110]: E0122 14:16:33.051043 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:33 crc kubenswrapper[5110]: E0122 14:16:33.151997 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:33 crc kubenswrapper[5110]: E0122 14:16:33.253206 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:33 crc kubenswrapper[5110]: E0122 14:16:33.353933 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:33 crc kubenswrapper[5110]: E0122 14:16:33.454454 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:33 crc kubenswrapper[5110]: E0122 14:16:33.555428 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:33 crc kubenswrapper[5110]: E0122 14:16:33.655811 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:33 crc kubenswrapper[5110]: E0122 14:16:33.756158 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:33 crc kubenswrapper[5110]: E0122 14:16:33.856651 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:33 crc kubenswrapper[5110]: E0122 14:16:33.957295 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:34 crc kubenswrapper[5110]: E0122 14:16:34.057881 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:34 crc kubenswrapper[5110]: E0122 14:16:34.157998 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:34 crc kubenswrapper[5110]: E0122 14:16:34.258506 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:34 crc kubenswrapper[5110]: E0122 14:16:34.358826 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:34 crc kubenswrapper[5110]: E0122 14:16:34.459178 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:34 crc kubenswrapper[5110]: E0122 14:16:34.560132 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:34 crc kubenswrapper[5110]: E0122 14:16:34.661664 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:34 crc kubenswrapper[5110]: E0122 14:16:34.762183 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:34 crc kubenswrapper[5110]: E0122 14:16:34.863462 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:34 crc kubenswrapper[5110]: E0122 14:16:34.964745 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:35 crc kubenswrapper[5110]: E0122 14:16:35.065725 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:35 crc kubenswrapper[5110]: E0122 14:16:35.167071 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:35 crc kubenswrapper[5110]: E0122 14:16:35.267502 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:35 crc kubenswrapper[5110]: E0122 14:16:35.368745 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:35 crc kubenswrapper[5110]: E0122 14:16:35.469384 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:35 crc kubenswrapper[5110]: E0122 14:16:35.569841 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:35 crc kubenswrapper[5110]: E0122 14:16:35.670908 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:35 crc kubenswrapper[5110]: E0122 14:16:35.771606 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:35 crc kubenswrapper[5110]: E0122 14:16:35.872988 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:35 crc kubenswrapper[5110]: E0122 14:16:35.973684 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:36 crc kubenswrapper[5110]: E0122 14:16:36.074479 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:36 crc kubenswrapper[5110]: E0122 14:16:36.175324 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:36 crc kubenswrapper[5110]: E0122 14:16:36.276692 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:36 crc kubenswrapper[5110]: E0122 14:16:36.377424 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:36 crc kubenswrapper[5110]: E0122 14:16:36.478579 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:36 crc kubenswrapper[5110]: I0122 14:16:36.560915 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 14:16:36 crc kubenswrapper[5110]: I0122 14:16:36.561152 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 14:16:36 crc kubenswrapper[5110]: I0122 14:16:36.562074 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:16:36 crc kubenswrapper[5110]: I0122 14:16:36.562112 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:16:36 crc kubenswrapper[5110]: I0122 14:16:36.562129 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:16:36 crc kubenswrapper[5110]: E0122 14:16:36.562662 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 14:16:36 crc kubenswrapper[5110]: I0122 14:16:36.562921 5110 scope.go:117] "RemoveContainer" containerID="d92e709029c46faaa9145ae147eb23298fd350281c9da48e17ca1a5fe7b5de07"
Jan 22 14:16:36 crc kubenswrapper[5110]: E0122 14:16:36.563170 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 22 14:16:36 crc kubenswrapper[5110]: E0122 14:16:36.578933 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:36 crc kubenswrapper[5110]: E0122 14:16:36.679336 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:36 crc kubenswrapper[5110]: E0122 14:16:36.780462 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:36 crc kubenswrapper[5110]: E0122 14:16:36.881217 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:36 crc kubenswrapper[5110]: E0122 14:16:36.981951 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:37 crc kubenswrapper[5110]: E0122 14:16:37.082986 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:37 crc kubenswrapper[5110]: E0122 14:16:37.183338 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:37 crc kubenswrapper[5110]: E0122 14:16:37.284290 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:37 crc kubenswrapper[5110]: E0122 14:16:37.384410 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:37 crc kubenswrapper[5110]: E0122 14:16:37.485395 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:37 crc kubenswrapper[5110]: E0122 14:16:37.586042 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:37 crc kubenswrapper[5110]: E0122 14:16:37.687230 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:37 crc kubenswrapper[5110]: E0122 14:16:37.787495 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:37 crc kubenswrapper[5110]: E0122 14:16:37.888688 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:37 crc kubenswrapper[5110]: E0122 14:16:37.989858 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:38 crc kubenswrapper[5110]: E0122 14:16:38.090918 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:38 crc kubenswrapper[5110]: E0122 14:16:38.191929 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:38 crc kubenswrapper[5110]: E0122 14:16:38.292880 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:38 crc kubenswrapper[5110]: E0122 14:16:38.393439 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:38 crc kubenswrapper[5110]: E0122 14:16:38.493889 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:38 crc kubenswrapper[5110]: E0122 14:16:38.594454 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:38 crc kubenswrapper[5110]: E0122 14:16:38.694744 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:38 crc kubenswrapper[5110]: E0122 14:16:38.795879 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:38 crc kubenswrapper[5110]: E0122 14:16:38.896424 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:38 crc kubenswrapper[5110]: E0122 14:16:38.997195 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:39 crc kubenswrapper[5110]: E0122 14:16:39.097993 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:39 crc kubenswrapper[5110]: E0122 14:16:39.198426 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:39 crc kubenswrapper[5110]: E0122 14:16:39.299265 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:39 crc kubenswrapper[5110]: E0122 14:16:39.399731 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:39 crc kubenswrapper[5110]: E0122 14:16:39.500133 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:39 crc kubenswrapper[5110]: E0122 14:16:39.600281 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:39 crc kubenswrapper[5110]: E0122 14:16:39.701446 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:39 crc kubenswrapper[5110]: E0122 14:16:39.802369 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:39 crc kubenswrapper[5110]: E0122 14:16:39.903473 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:40 crc kubenswrapper[5110]: E0122 14:16:40.004419 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:40 crc kubenswrapper[5110]: E0122 14:16:40.105184 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:40 crc kubenswrapper[5110]: E0122 14:16:40.205713 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:40 crc kubenswrapper[5110]: E0122 14:16:40.306642 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:40 crc kubenswrapper[5110]: E0122 14:16:40.332874 5110 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 22 14:16:40 crc kubenswrapper[5110]: E0122 14:16:40.406993 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:40 crc kubenswrapper[5110]: E0122 14:16:40.508064 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:40 crc kubenswrapper[5110]: E0122 14:16:40.609124 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:40 crc kubenswrapper[5110]: E0122 14:16:40.709441 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:40 crc kubenswrapper[5110]: E0122 14:16:40.809681 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:40 crc kubenswrapper[5110]: E0122 14:16:40.910225 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:41 crc kubenswrapper[5110]: E0122 14:16:41.010478 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:41 crc kubenswrapper[5110]: E0122 14:16:41.111099 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:41 crc kubenswrapper[5110]: E0122 14:16:41.211775 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:41 crc kubenswrapper[5110]: E0122 14:16:41.311899 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:41 crc kubenswrapper[5110]: E0122 14:16:41.412295 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:41 crc kubenswrapper[5110]: E0122 14:16:41.512757 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:41 crc kubenswrapper[5110]: E0122 14:16:41.612892 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:41 crc kubenswrapper[5110]: I0122 14:16:41.621938 5110 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 22 14:16:41 crc kubenswrapper[5110]: E0122 14:16:41.713919 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:41 crc kubenswrapper[5110]: E0122 14:16:41.814322 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:41 crc kubenswrapper[5110]: E0122 14:16:41.914794 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:42 crc kubenswrapper[5110]: E0122 14:16:42.015291 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:42 crc kubenswrapper[5110]: E0122 14:16:42.116091 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:42 crc kubenswrapper[5110]: E0122 14:16:42.216671 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:42 crc kubenswrapper[5110]: E0122 14:16:42.317484 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:42 crc kubenswrapper[5110]: E0122 14:16:42.418521 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 14:16:42 crc kubenswrapper[5110]: E0122 14:16:42.458133 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.462910 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.462952 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.462962 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.462975 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.462985 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:42Z","lastTransitionTime":"2026-01-22T14:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:42 crc kubenswrapper[5110]: E0122 14:16:42.477006 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"852a491e-9e7b-4f26-a7f5-3ca241db6d4a\\\",\\\"systemUUID\\\":\\\"c7f66b8f-fb8d-43bb-91c4-80fc1b273d77\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.481556 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.481703 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.481745 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.481780 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.481806 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:42Z","lastTransitionTime":"2026-01-22T14:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:42 crc kubenswrapper[5110]: E0122 14:16:42.494342 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"852a491e-9e7b-4f26-a7f5-3ca241db6d4a\\\",\\\"systemUUID\\\":\\\"c7f66b8f-fb8d-43bb-91c4-80fc1b273d77\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.498642 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.498687 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.498714 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.498737 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.498752 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:42Z","lastTransitionTime":"2026-01-22T14:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:42 crc kubenswrapper[5110]: E0122 14:16:42.509809 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"852a491e-9e7b-4f26-a7f5-3ca241db6d4a\\\",\\\"systemUUID\\\":\\\"c7f66b8f-fb8d-43bb-91c4-80fc1b273d77\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.513486 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.513612 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.513708 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.513739 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:42 crc kubenswrapper[5110]: I0122 14:16:42.513760 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:42Z","lastTransitionTime":"2026-01-22T14:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:42 crc kubenswrapper[5110]: E0122 14:16:42.524608 5110 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"852a491e-9e7b-4f26-a7f5-3ca241db6d4a\\\",\\\"systemUUID\\\":\\\"c7f66b8f-fb8d-43bb-91c4-80fc1b273d77\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:42 crc kubenswrapper[5110]: E0122 14:16:42.524770 5110 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 22 14:16:42 crc kubenswrapper[5110]: E0122 14:16:42.524800 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:42 crc kubenswrapper[5110]: E0122 14:16:42.625574 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:42 crc kubenswrapper[5110]: E0122 14:16:42.726179 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:42 crc kubenswrapper[5110]: E0122 14:16:42.827159 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:42 crc kubenswrapper[5110]: E0122 14:16:42.927997 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:43 crc kubenswrapper[5110]: E0122 14:16:43.028859 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:43 crc kubenswrapper[5110]: E0122 14:16:43.129602 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:43 crc kubenswrapper[5110]: E0122 
14:16:43.230779 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:43 crc kubenswrapper[5110]: E0122 14:16:43.331696 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:43 crc kubenswrapper[5110]: E0122 14:16:43.432196 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:43 crc kubenswrapper[5110]: E0122 14:16:43.532703 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:43 crc kubenswrapper[5110]: E0122 14:16:43.633704 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:43 crc kubenswrapper[5110]: E0122 14:16:43.734434 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:43 crc kubenswrapper[5110]: E0122 14:16:43.835124 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:43 crc kubenswrapper[5110]: E0122 14:16:43.935837 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:44 crc kubenswrapper[5110]: E0122 14:16:44.036248 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:44 crc kubenswrapper[5110]: E0122 14:16:44.136389 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:44 crc kubenswrapper[5110]: E0122 14:16:44.237499 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:44 crc kubenswrapper[5110]: E0122 14:16:44.338372 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 
14:16:44 crc kubenswrapper[5110]: E0122 14:16:44.439547 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:44 crc kubenswrapper[5110]: E0122 14:16:44.540432 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:44 crc kubenswrapper[5110]: E0122 14:16:44.640546 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:44 crc kubenswrapper[5110]: E0122 14:16:44.741396 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:44 crc kubenswrapper[5110]: E0122 14:16:44.841934 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:44 crc kubenswrapper[5110]: E0122 14:16:44.943022 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:45 crc kubenswrapper[5110]: E0122 14:16:45.043865 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:45 crc kubenswrapper[5110]: E0122 14:16:45.144733 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:45 crc kubenswrapper[5110]: E0122 14:16:45.244949 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:45 crc kubenswrapper[5110]: I0122 14:16:45.272970 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:16:45 crc kubenswrapper[5110]: I0122 14:16:45.274344 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:45 crc kubenswrapper[5110]: I0122 14:16:45.274479 5110 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:45 crc kubenswrapper[5110]: I0122 14:16:45.274553 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:45 crc kubenswrapper[5110]: E0122 14:16:45.275184 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:16:45 crc kubenswrapper[5110]: E0122 14:16:45.345974 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:45 crc kubenswrapper[5110]: E0122 14:16:45.447564 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:45 crc kubenswrapper[5110]: E0122 14:16:45.548268 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:45 crc kubenswrapper[5110]: E0122 14:16:45.649032 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:45 crc kubenswrapper[5110]: E0122 14:16:45.749598 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:45 crc kubenswrapper[5110]: E0122 14:16:45.849854 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:45 crc kubenswrapper[5110]: E0122 14:16:45.950052 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:46 crc kubenswrapper[5110]: E0122 14:16:46.050692 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:46 crc kubenswrapper[5110]: E0122 14:16:46.151242 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:46 crc kubenswrapper[5110]: E0122 
14:16:46.251714 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:46 crc kubenswrapper[5110]: E0122 14:16:46.352338 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:46 crc kubenswrapper[5110]: E0122 14:16:46.452954 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:46 crc kubenswrapper[5110]: E0122 14:16:46.553878 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:46 crc kubenswrapper[5110]: E0122 14:16:46.655045 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:46 crc kubenswrapper[5110]: E0122 14:16:46.755664 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:46 crc kubenswrapper[5110]: E0122 14:16:46.856799 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:46 crc kubenswrapper[5110]: E0122 14:16:46.957502 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:47 crc kubenswrapper[5110]: E0122 14:16:47.058500 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:47 crc kubenswrapper[5110]: E0122 14:16:47.159127 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:47 crc kubenswrapper[5110]: E0122 14:16:47.259505 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:47 crc kubenswrapper[5110]: E0122 14:16:47.360554 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 
14:16:47 crc kubenswrapper[5110]: E0122 14:16:47.461570 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:47 crc kubenswrapper[5110]: E0122 14:16:47.562039 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:47 crc kubenswrapper[5110]: E0122 14:16:47.662701 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:47 crc kubenswrapper[5110]: E0122 14:16:47.763518 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:47 crc kubenswrapper[5110]: E0122 14:16:47.863854 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:47 crc kubenswrapper[5110]: E0122 14:16:47.964510 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:48 crc kubenswrapper[5110]: E0122 14:16:48.065135 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:48 crc kubenswrapper[5110]: E0122 14:16:48.165928 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:48 crc kubenswrapper[5110]: E0122 14:16:48.266371 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.272800 5110 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.273742 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.273797 5110 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.273814 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:48 crc kubenswrapper[5110]: E0122 14:16:48.274363 5110 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.274663 5110 scope.go:117] "RemoveContainer" containerID="d92e709029c46faaa9145ae147eb23298fd350281c9da48e17ca1a5fe7b5de07" Jan 22 14:16:48 crc kubenswrapper[5110]: E0122 14:16:48.274873 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 14:16:48 crc kubenswrapper[5110]: E0122 14:16:48.366932 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.392874 5110 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 14:16:48 crc kubenswrapper[5110]: E0122 14:16:48.467880 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:48 crc kubenswrapper[5110]: E0122 14:16:48.568154 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:48 crc kubenswrapper[5110]: E0122 14:16:48.669199 5110 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.707274 5110 
reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.771017 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.771076 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.771093 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.771116 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.771134 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:48Z","lastTransitionTime":"2026-01-22T14:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.791875 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.805497 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.873962 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.874034 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.874059 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.874091 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.874120 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:48Z","lastTransitionTime":"2026-01-22T14:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.907122 5110 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.908042 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.976721 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.976795 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.976811 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.976835 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:48 crc kubenswrapper[5110]: I0122 14:16:48.976851 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:48Z","lastTransitionTime":"2026-01-22T14:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.006364 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.079352 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.079613 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.079788 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.079918 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.080052 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:49Z","lastTransitionTime":"2026-01-22T14:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.108854 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.183036 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.183100 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.183118 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.183144 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.183167 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:49Z","lastTransitionTime":"2026-01-22T14:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.210191 5110 apiserver.go:52] "Watching apiserver" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.220244 5110 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.221077 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z","openshift-dns/node-resolver-c64tm","openshift-image-registry/node-ca-9ggqd","openshift-multus/multus-rj5zq","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-node-ms6jk","openshift-multus/multus-additional-cni-plugins-b7k8l","openshift-network-node-identity/network-node-identity-dgvkt","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-machine-config-operator/machine-config-daemon-grf5q","openshift-multus/network-metrics-daemon-js5pl"] Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.229057 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.228229 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.229189 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.229251 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.229601 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.232242 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.232405 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.232648 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.233740 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.235713 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.236439 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.236467 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.236467 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.236574 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.237213 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.238163 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.245751 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-c64tm"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.246533 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.246732 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.247985 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.248277 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.248404 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.254686 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.257251 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.257722 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.258032 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.258377 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.259002 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.260052 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.260563 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.268880 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-9ggqd"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.269103 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.271894 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.272265 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.272777 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.273106 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.273358 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.273465 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.274316 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.274851 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.274980 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.275737 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.275793 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-b7k8l"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.276506 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.278512 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.280064 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.280174 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.280477 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.280967 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.281572 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-grf5q"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.281757 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.285032 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.285385 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.285933 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.286014 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.285942 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.286263 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.286308 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.286326 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.286349 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.286368 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:49Z","lastTransitionTime":"2026-01-22T14:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.289962 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-js5pl"
Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.290299 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-js5pl" podUID="455fa20f-c1d4-4086-8874-9526d4c4d24d"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.291775 5110 scope.go:117] "RemoveContainer" containerID="d92e709029c46faaa9145ae147eb23298fd350281c9da48e17ca1a5fe7b5de07"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.291867 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.292027 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294116 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-host-run-netns\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294162 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/56adef60-8300-4815-a17b-19370e323339-system-cni-dir\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294203 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b3ccd93-5778-48f1-a454-e389c9019370-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-jjg8z\" (UID: \"5b3ccd93-5778-48f1-a454-e389c9019370\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294236 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b3ccd93-5778-48f1-a454-e389c9019370-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-jjg8z\" (UID: \"5b3ccd93-5778-48f1-a454-e389c9019370\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294267 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-systemd-units\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294298 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-node-log\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294326 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6327725f-9fd9-4ea3-b51f-3dfb27454a19-ovnkube-script-lib\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294355 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7jgz\" (UniqueName: \"kubernetes.io/projected/455fa20f-c1d4-4086-8874-9526d4c4d24d-kube-api-access-r7jgz\") pod \"network-metrics-daemon-js5pl\" (UID: \"455fa20f-c1d4-4086-8874-9526d4c4d24d\") " pod="openshift-multus/network-metrics-daemon-js5pl"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294384 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-host-var-lib-kubelet\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294411 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-cni-bin\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294436 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-cni-netd\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294465 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgkq4\" (UniqueName: \"kubernetes.io/projected/ebd2fbce-0bc4-4666-adab-0cb2648f026f-kube-api-access-kgkq4\") pod \"node-resolver-c64tm\" (UID: \"ebd2fbce-0bc4-4666-adab-0cb2648f026f\") " pod="openshift-dns/node-resolver-c64tm"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294493 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fzdc\" (UniqueName: \"kubernetes.io/projected/0f1f1e72-df44-4f16-87b5-a1ec3b2831c7-kube-api-access-5fzdc\") pod \"node-ca-9ggqd\" (UID: \"0f1f1e72-df44-4f16-87b5-a1ec3b2831c7\") " pod="openshift-image-registry/node-ca-9ggqd"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294518 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/56adef60-8300-4815-a17b-19370e323339-cnibin\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294544 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-run-systemd\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294579 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294610 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-system-cni-dir\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294708 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b3ccd93-5778-48f1-a454-e389c9019370-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-jjg8z\" (UID: \"5b3ccd93-5778-48f1-a454-e389c9019370\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294743 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-run-openvswitch\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294778 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-run-ovn-kubernetes\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294809 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6327725f-9fd9-4ea3-b51f-3dfb27454a19-ovn-node-metrics-cert\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294842 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294870 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9zmq\" (UniqueName: \"kubernetes.io/projected/a4b81444-a003-4b75-87dd-90ef7445dde3-kube-api-access-q9zmq\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294896 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/56adef60-8300-4815-a17b-19370e323339-tuning-conf-dir\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294924 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r9ss\" (UniqueName: \"kubernetes.io/projected/5b3ccd93-5778-48f1-a454-e389c9019370-kube-api-access-5r9ss\") pod \"ovnkube-control-plane-57b78d8988-jjg8z\" (UID: \"5b3ccd93-5778-48f1-a454-e389c9019370\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294949 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-log-socket\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.294972 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0f1f1e72-df44-4f16-87b5-a1ec3b2831c7-serviceca\") pod \"node-ca-9ggqd\" (UID: \"0f1f1e72-df44-4f16-87b5-a1ec3b2831c7\") " pod="openshift-image-registry/node-ca-9ggqd"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295003 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295029 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6bfecfa4-ce38-4a92-a3dc-588176267b96-rootfs\") pod \"machine-config-daemon-grf5q\" (UID: \"6bfecfa4-ce38-4a92-a3dc-588176267b96\") " pod="openshift-machine-config-operator/machine-config-daemon-grf5q"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295057 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/56adef60-8300-4815-a17b-19370e323339-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295088 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295120 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295149 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-host-run-multus-certs\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295177 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/56adef60-8300-4815-a17b-19370e323339-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295257 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295287 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295312 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-multus-cni-dir\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295340 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-os-release\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295366 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-host-var-lib-cni-multus\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295394 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-slash\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295419 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295453 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-multus-socket-dir-parent\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295482 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-hostroot\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295508 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a4b81444-a003-4b75-87dd-90ef7445dde3-multus-daemon-config\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295534 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-kubelet\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295571 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-run-netns\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295600 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName:
\"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-etc-openvswitch\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295647 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-run-ovn\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295681 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295709 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295741 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-multus-conf-dir\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295770 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-etc-kubernetes\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295799 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6bfecfa4-ce38-4a92-a3dc-588176267b96-proxy-tls\") pod \"machine-config-daemon-grf5q\" (UID: \"6bfecfa4-ce38-4a92-a3dc-588176267b96\") " pod="openshift-machine-config-operator/machine-config-daemon-grf5q" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295826 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6bfecfa4-ce38-4a92-a3dc-588176267b96-mcd-auth-proxy-config\") pod \"machine-config-daemon-grf5q\" (UID: \"6bfecfa4-ce38-4a92-a3dc-588176267b96\") " pod="openshift-machine-config-operator/machine-config-daemon-grf5q" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.295873 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296042 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-host-run-k8s-cni-cncf-io\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296083 5110 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp5ph\" (UniqueName: \"kubernetes.io/projected/6bfecfa4-ce38-4a92-a3dc-588176267b96-kube-api-access-mp5ph\") pod \"machine-config-daemon-grf5q\" (UID: \"6bfecfa4-ce38-4a92-a3dc-588176267b96\") " pod="openshift-machine-config-operator/machine-config-daemon-grf5q" Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.296106 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296114 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/56adef60-8300-4815-a17b-19370e323339-cni-binary-copy\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296145 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z9bv\" (UniqueName: \"kubernetes.io/projected/56adef60-8300-4815-a17b-19370e323339-kube-api-access-4z9bv\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l" Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.296207 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:16:49.79615957 +0000 UTC m=+90.018243939 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296233 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6327725f-9fd9-4ea3-b51f-3dfb27454a19-env-overrides\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296277 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2jhn\" (UniqueName: \"kubernetes.io/projected/6327725f-9fd9-4ea3-b51f-3dfb27454a19-kube-api-access-b2jhn\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296313 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296345 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs\") pod \"network-metrics-daemon-js5pl\" (UID: \"455fa20f-c1d4-4086-8874-9526d4c4d24d\") " pod="openshift-multus/network-metrics-daemon-js5pl" Jan 
22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296373 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a4b81444-a003-4b75-87dd-90ef7445dde3-cni-binary-copy\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296407 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-host-var-lib-cni-bin\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296437 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/56adef60-8300-4815-a17b-19370e323339-os-release\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296478 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-var-lib-openvswitch\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296506 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6327725f-9fd9-4ea3-b51f-3dfb27454a19-ovnkube-config\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296534 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ebd2fbce-0bc4-4666-adab-0cb2648f026f-hosts-file\") pod \"node-resolver-c64tm\" (UID: \"ebd2fbce-0bc4-4666-adab-0cb2648f026f\") " pod="openshift-dns/node-resolver-c64tm" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296568 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296595 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ebd2fbce-0bc4-4666-adab-0cb2648f026f-tmp-dir\") pod \"node-resolver-c64tm\" (UID: \"ebd2fbce-0bc4-4666-adab-0cb2648f026f\") " pod="openshift-dns/node-resolver-c64tm" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296659 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f1f1e72-df44-4f16-87b5-a1ec3b2831c7-host\") pod \"node-ca-9ggqd\" (UID: \"0f1f1e72-df44-4f16-87b5-a1ec3b2831c7\") " pod="openshift-image-registry/node-ca-9ggqd" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296696 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " 
pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296854 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.296895 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-cnibin\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.297571 5110 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.297876 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.298606 5110 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.298741 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 
nodeName:}" failed. No retries permitted until 2026-01-22 14:16:49.798716688 +0000 UTC m=+90.020801067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.299197 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.299355 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.306205 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.310769 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " 
pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.311915 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-c64tm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebd2fbce-0bc4-4666-adab-0cb2648f026f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgkq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c64tm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.317860 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.317892 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.317905 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b 
for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.317978 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 14:16:49.817958856 +0000 UTC m=+90.040043215 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.324418 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.324463 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.324486 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.324584 5110 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 14:16:49.82455635 +0000 UTC m=+90.046640729 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.324567 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.326664 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.328134 5110 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.329575 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.335145 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.344186 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.352686 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.360764 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b3ccd93-5778-48f1-a454-e389c9019370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r9ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r9ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:16:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-jjg8z\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.369484 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c830b1a-9f8e-48d6-bc89-8f0d3123195e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://aaeea57cf421b20f0f956e67318a63fe34714ed959f1c539527cef6d4da220bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2026-01-22T14:15:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6257cd42c0aabd74cf1b4fb090dcc7f6042eff0cceb65fcfce3017475607d322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:15:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b9de386700dff95c5e03c1805a61ba1df0277684f7dcfc3f037b12d88e6fd06d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:15:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee03fe75796c5268a84b792e8f46e78c28e13dacdceac5b4f4d8c783fb2f789e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee03fe75796c5268a84b792e8f46e78c28e13dacdceac5b4f4d8c783fb2f789e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T14:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T14:15:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-
01-22T14:15:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.378644 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.387869 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.387903 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.387917 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.387933 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.387946 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:49Z","lastTransitionTime":"2026-01-22T14:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.388192 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.390561 5110 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.396307 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bfecfa4-ce38-4a92-a3dc-588176267b96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mp5ph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:16:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grf5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.397250 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.397374 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.397511 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.397609 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.397726 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.397819 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.397913 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.398007 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.398253 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.398047 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.398300 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.398301 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.398774 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.398789 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.398730 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.399013 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.399035 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.399101 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.399130 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.399149 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.399167 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.399183 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.399742 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.399789 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.399906 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.400110 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.400221 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.400322 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.400415 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.400507 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.400596 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.400721 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.401487 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.400263 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.400499 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.400576 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.400886 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.401195 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.401425 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.401422 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.402060 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.402160 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.402250 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.402341 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.402426 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.402522 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.402072 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.402418 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.402577 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.402948 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403000 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.402611 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403141 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403236 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403248 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403244 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403314 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403346 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403368 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403387 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403407 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403428 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403263 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403449 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403253 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403486 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403495 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403535 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403564 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403587 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403611 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403651 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403675 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403740 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403766 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403793 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403820 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403844 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403867 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403890 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403913 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403934 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403955 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403983 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404006 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404029 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404052 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404073 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404094 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404120 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404142 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404164 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404189 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404210 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404231 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404251 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404275 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404297 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404317 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404339 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404362 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404395 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404427 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404460 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404566 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404604 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404665 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404689 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404713 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404735 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404757 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404780 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404802 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404824 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404846 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404868 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404894 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404959 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404983 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.405008 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404986 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22125363-9afc-45ba-9f45-140a893ede80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0576c46d1c23047e597f77ca52369f79c30edccc5f819d2efbf1c4389bc97657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:15:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://744f292bdaf2a8b5cac207b241d045be99230cd860073c8e7b37c609136d2fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://744f292bdaf2a8b5cac207b241d045be99230cd860073c8e7b37c609136d2fcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T14:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T14:15:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.405035 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.405057 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.405080 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.405104 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403583 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403872 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). 
InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.403949 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404317 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404536 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404576 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404835 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404949 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.404992 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.405139 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.405101 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.405498 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.405532 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.405639 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.405692 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.405894 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.405965 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406003 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406117 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406178 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406220 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406255 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406290 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406325 5110 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406364 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406397 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406494 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406526 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406527 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod 
"a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406558 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406646 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406682 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406710 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406739 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 
14:16:49.406766 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406792 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406817 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406840 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406864 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406888 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406913 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.407975 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408093 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408142 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408178 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 
14:16:49.408216 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408246 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408286 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408313 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408342 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408453 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408474 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408502 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408520 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408542 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408567 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 
14:16:49.408589 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408613 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408865 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408901 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408930 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408958 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408980 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409026 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409048 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409078 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409256 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409330 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409364 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409390 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409416 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409441 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409493 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409519 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409545 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409574 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409600 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409642 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409665 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409688 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409712 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409735 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409760 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409792 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409817 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409838 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409862 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409887 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409923 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409949 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409976 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410003 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410028 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410052 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410077 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410104 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410136 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410164 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410191 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410211 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410228 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410250 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410269 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410287 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410311 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410328 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410408 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410438 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410464 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410488 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410507 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410526 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410545 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410567 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410585 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410608 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410641 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410659 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410680 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410699 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410721 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410739 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410757 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410779 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410798 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410817 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410838 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410856 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410879 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.411912 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.411989 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412039 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412074 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412103 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412135 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412166 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412201 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412251 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412305 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412348 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412384 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412432 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412474 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412508 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412541 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412574 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412605 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412751 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-var-lib-openvswitch\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412785 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6327725f-9fd9-4ea3-b51f-3dfb27454a19-ovnkube-config\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412815 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ebd2fbce-0bc4-4666-adab-0cb2648f026f-hosts-file\") pod \"node-resolver-c64tm\" (UID: \"ebd2fbce-0bc4-4666-adab-0cb2648f026f\") " pod="openshift-dns/node-resolver-c64tm"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412843 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412875 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ebd2fbce-0bc4-4666-adab-0cb2648f026f-tmp-dir\") pod \"node-resolver-c64tm\" (UID: \"ebd2fbce-0bc4-4666-adab-0cb2648f026f\") " pod="openshift-dns/node-resolver-c64tm"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412901 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f1f1e72-df44-4f16-87b5-a1ec3b2831c7-host\") pod \"node-ca-9ggqd\" (UID: \"0f1f1e72-df44-4f16-87b5-a1ec3b2831c7\") " pod="openshift-image-registry/node-ca-9ggqd"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413051 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-cnibin\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413083 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-host-run-netns\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413109 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/56adef60-8300-4815-a17b-19370e323339-system-cni-dir\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413145 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b3ccd93-5778-48f1-a454-e389c9019370-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-jjg8z\" (UID: \"5b3ccd93-5778-48f1-a454-e389c9019370\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413172 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b3ccd93-5778-48f1-a454-e389c9019370-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-jjg8z\" (UID: \"5b3ccd93-5778-48f1-a454-e389c9019370\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413200 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-systemd-units\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413226 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-node-log\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413256 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6327725f-9fd9-4ea3-b51f-3dfb27454a19-ovnkube-script-lib\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413283 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r7jgz\" (UniqueName: \"kubernetes.io/projected/455fa20f-c1d4-4086-8874-9526d4c4d24d-kube-api-access-r7jgz\") pod \"network-metrics-daemon-js5pl\" (UID: \"455fa20f-c1d4-4086-8874-9526d4c4d24d\") " pod="openshift-multus/network-metrics-daemon-js5pl"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413306 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-host-var-lib-kubelet\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413325 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-cni-bin\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413343 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-cni-netd\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413364 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kgkq4\" (UniqueName: \"kubernetes.io/projected/ebd2fbce-0bc4-4666-adab-0cb2648f026f-kube-api-access-kgkq4\") pod \"node-resolver-c64tm\" (UID: \"ebd2fbce-0bc4-4666-adab-0cb2648f026f\") " pod="openshift-dns/node-resolver-c64tm"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406572 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406837 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413756 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.406992 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a").
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.407238 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.407249 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.407274 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.407499 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.407557 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.407581 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408041 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408237 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408386 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408610 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408832 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408866 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.408899 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409285 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409512 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409541 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409687 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409745 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.409911 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410020 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410078 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410362 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410382 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410389 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410846 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410895 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410961 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.410605 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.411460 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.411425 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.411485 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.411563 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.411693 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.411867 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412076 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.411929 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412135 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412759 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412795 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412853 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412882 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.412936 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413278 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413277 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413311 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413696 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.414107 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.414201 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.416197 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.416216 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.416816 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.417111 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.417178 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.417537 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.417548 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.418182 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.418220 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.418298 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.418320 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.418521 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.418822 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.418839 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.418848 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.418850 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.418999 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.419180 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.419228 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.419558 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:49.919535588 +0000 UTC m=+90.141619957 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.419569 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.419671 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.419744 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.419872 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.420241 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.420431 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.420787 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.421187 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.421227 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.421447 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.421522 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.421942 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.422123 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.422192 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.422443 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b3ccd93-5778-48f1-a454-e389c9019370-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-jjg8z\" (UID: \"5b3ccd93-5778-48f1-a454-e389c9019370\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.422524 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.422549 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ebd2fbce-0bc4-4666-adab-0cb2648f026f-tmp-dir\") pod \"node-resolver-c64tm\" (UID: \"ebd2fbce-0bc4-4666-adab-0cb2648f026f\") " pod="openshift-dns/node-resolver-c64tm"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.422653 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ebd2fbce-0bc4-4666-adab-0cb2648f026f-hosts-file\") pod \"node-resolver-c64tm\" (UID: \"ebd2fbce-0bc4-4666-adab-0cb2648f026f\") " pod="openshift-dns/node-resolver-c64tm"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.422566 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.422579 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.422669 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.422739 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.422773 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f1f1e72-df44-4f16-87b5-a1ec3b2831c7-host\") pod \"node-ca-9ggqd\" (UID: \"0f1f1e72-df44-4f16-87b5-a1ec3b2831c7\") " pod="openshift-image-registry/node-ca-9ggqd"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.422817 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-cnibin\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.422851 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-host-run-netns\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.422883 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/56adef60-8300-4815-a17b-19370e323339-system-cni-dir\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.423164 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.423169 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.423269 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.423336 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6327725f-9fd9-4ea3-b51f-3dfb27454a19-ovnkube-config\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.419079 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.423595 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.423611 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.423607 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.423593 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.419721 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-var-lib-openvswitch\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.423920 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.424114 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.424450 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.424489 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.424533 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-systemd-units\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.424584 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-node-log\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.424751 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.424818 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-host-var-lib-kubelet\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.424978 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425014 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425046 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425062 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-cni-bin\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.413809 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5fzdc\" (UniqueName: \"kubernetes.io/projected/0f1f1e72-df44-4f16-87b5-a1ec3b2831c7-kube-api-access-5fzdc\") pod \"node-ca-9ggqd\" (UID: \"0f1f1e72-df44-4f16-87b5-a1ec3b2831c7\") " pod="openshift-image-registry/node-ca-9ggqd"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425163 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/56adef60-8300-4815-a17b-19370e323339-cnibin\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425185 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-cni-netd\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425194 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-run-systemd\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425243 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/56adef60-8300-4815-a17b-19370e323339-cnibin\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425278 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-system-cni-dir\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425311 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b3ccd93-5778-48f1-a454-e389c9019370-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-jjg8z\" (UID: \"5b3ccd93-5778-48f1-a454-e389c9019370\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425346 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-run-openvswitch\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425375 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-run-ovn-kubernetes\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425400 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6327725f-9fd9-4ea3-b51f-3dfb27454a19-ovn-node-metrics-cert\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425431 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q9zmq\" (UniqueName: \"kubernetes.io/projected/a4b81444-a003-4b75-87dd-90ef7445dde3-kube-api-access-q9zmq\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425457 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/56adef60-8300-4815-a17b-19370e323339-tuning-conf-dir\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425496 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425522 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5r9ss\" (UniqueName: \"kubernetes.io/projected/5b3ccd93-5778-48f1-a454-e389c9019370-kube-api-access-5r9ss\") pod \"ovnkube-control-plane-57b78d8988-jjg8z\" (UID: \"5b3ccd93-5778-48f1-a454-e389c9019370\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425530 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6327725f-9fd9-4ea3-b51f-3dfb27454a19-ovnkube-script-lib\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425548 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-log-socket\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425580 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0f1f1e72-df44-4f16-87b5-a1ec3b2831c7-serviceca\") pod \"node-ca-9ggqd\" (UID: \"0f1f1e72-df44-4f16-87b5-a1ec3b2831c7\") " pod="openshift-image-registry/node-ca-9ggqd"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425578 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-run-openvswitch\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425755 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6bfecfa4-ce38-4a92-a3dc-588176267b96-rootfs\") pod \"machine-config-daemon-grf5q\" (UID: \"6bfecfa4-ce38-4a92-a3dc-588176267b96\") " pod="openshift-machine-config-operator/machine-config-daemon-grf5q"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425789 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/56adef60-8300-4815-a17b-19370e323339-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425817 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425945 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-host-run-multus-certs\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.425999 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/56adef60-8300-4815-a17b-19370e323339-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426055 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-multus-cni-dir\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426074 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-host-run-multus-certs\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426088 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-os-release\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426120 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-host-var-lib-cni-multus\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426154 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426163 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-slash\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426190 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-slash\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426225 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-run-ovn-kubernetes\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426291 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-multus-socket-dir-parent\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq"
Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426317
5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-system-cni-dir\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426328 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-hostroot\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426371 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-multus-socket-dir-parent\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426398 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-hostroot\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426395 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a4b81444-a003-4b75-87dd-90ef7445dde3-multus-daemon-config\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426426 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-kubelet\") pod 
\"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426446 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-run-netns\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426474 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-etc-openvswitch\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426500 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-run-ovn\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426531 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426558 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-multus-conf-dir\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " 
pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426577 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-etc-kubernetes\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426583 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426601 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6bfecfa4-ce38-4a92-a3dc-588176267b96-proxy-tls\") pod \"machine-config-daemon-grf5q\" (UID: \"6bfecfa4-ce38-4a92-a3dc-588176267b96\") " pod="openshift-machine-config-operator/machine-config-daemon-grf5q" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426651 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6bfecfa4-ce38-4a92-a3dc-588176267b96-mcd-auth-proxy-config\") pod \"machine-config-daemon-grf5q\" (UID: \"6bfecfa4-ce38-4a92-a3dc-588176267b96\") " pod="openshift-machine-config-operator/machine-config-daemon-grf5q" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426700 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-host-run-k8s-cni-cncf-io\") pod \"multus-rj5zq\" (UID: 
\"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426739 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mp5ph\" (UniqueName: \"kubernetes.io/projected/6bfecfa4-ce38-4a92-a3dc-588176267b96-kube-api-access-mp5ph\") pod \"machine-config-daemon-grf5q\" (UID: \"6bfecfa4-ce38-4a92-a3dc-588176267b96\") " pod="openshift-machine-config-operator/machine-config-daemon-grf5q" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426765 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/56adef60-8300-4815-a17b-19370e323339-cni-binary-copy\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426781 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-multus-cni-dir\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426791 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4z9bv\" (UniqueName: \"kubernetes.io/projected/56adef60-8300-4815-a17b-19370e323339-kube-api-access-4z9bv\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426821 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6327725f-9fd9-4ea3-b51f-3dfb27454a19-env-overrides\") pod \"ovnkube-node-ms6jk\" (UID: 
\"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426910 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b2jhn\" (UniqueName: \"kubernetes.io/projected/6327725f-9fd9-4ea3-b51f-3dfb27454a19-kube-api-access-b2jhn\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426925 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426945 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs\") pod \"network-metrics-daemon-js5pl\" (UID: \"455fa20f-c1d4-4086-8874-9526d4c4d24d\") " pod="openshift-multus/network-metrics-daemon-js5pl" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426972 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a4b81444-a003-4b75-87dd-90ef7445dde3-cni-binary-copy\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.426997 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-host-var-lib-cni-bin\") pod \"multus-rj5zq\" (UID: 
\"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.427072 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/56adef60-8300-4815-a17b-19370e323339-os-release\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.427187 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-host-var-lib-cni-bin\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.427284 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.427338 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/56adef60-8300-4815-a17b-19370e323339-os-release\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.427367 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-host-run-k8s-cni-cncf-io\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.427339 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-kubelet\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.427401 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-etc-openvswitch\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.427423 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-run-ovn\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 
14:16:49.427448 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.427471 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-multus-conf-dir\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.427492 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.427519 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-etc-kubernetes\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.427870 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.427962 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.427971 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a4b81444-a003-4b75-87dd-90ef7445dde3-multus-daemon-config\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.428145 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/56adef60-8300-4815-a17b-19370e323339-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.428203 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.428249 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.428615 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/56adef60-8300-4815-a17b-19370e323339-cni-binary-copy\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.429310 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.429856 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6327725f-9fd9-4ea3-b51f-3dfb27454a19-env-overrides\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.429928 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.430038 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.430346 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.430469 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.430635 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.430805 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.430818 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.431037 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.431094 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-run-systemd\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.431157 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.431225 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.431249 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-host-run-netns\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.431249 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-os-release\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.431538 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.431701 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.431811 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.431875 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.431915 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6bfecfa4-ce38-4a92-a3dc-588176267b96-rootfs\") pod \"machine-config-daemon-grf5q\" (UID: \"6bfecfa4-ce38-4a92-a3dc-588176267b96\") " pod="openshift-machine-config-operator/machine-config-daemon-grf5q" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432153 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/56adef60-8300-4815-a17b-19370e323339-tuning-conf-dir\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432192 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6327725f-9fd9-4ea3-b51f-3dfb27454a19-log-socket\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432226 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a4b81444-a003-4b75-87dd-90ef7445dde3-host-var-lib-cni-multus\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 
14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.432303 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs podName:455fa20f-c1d4-4086-8874-9526d4c4d24d nodeName:}" failed. No retries permitted until 2026-01-22 14:16:49.932283115 +0000 UTC m=+90.154367554 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs") pod "network-metrics-daemon-js5pl" (UID: "455fa20f-c1d4-4086-8874-9526d4c4d24d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432473 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/56adef60-8300-4815-a17b-19370e323339-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432587 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432606 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432709 5110 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432729 5110 
reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432744 5110 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432757 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432769 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432785 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432800 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432813 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432825 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" 
(UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432840 5110 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432854 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432868 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432882 5110 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432895 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432909 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432925 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: 
\"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432938 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432952 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432967 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432981 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.432994 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433007 5110 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433021 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 22 
14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433034 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433047 5110 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433062 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433077 5110 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433092 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433106 5110 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433120 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433133 5110 reconciler_common.go:299] "Volume 
detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433145 5110 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433160 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433172 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433185 5110 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433199 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433213 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433225 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433237 5110 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433250 5110 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433262 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433273 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433286 5110 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433299 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433312 5110 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: 
I0122 14:16:49.433322 5110 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433334 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433345 5110 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433358 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433585 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a4b81444-a003-4b75-87dd-90ef7445dde3-cni-binary-copy\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.433997 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434022 5110 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434034 5110 
reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434049 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434062 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434075 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434088 5110 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434102 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434137 5110 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434252 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" 
(UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434267 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434280 5110 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434294 5110 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434308 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434306 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434321 5110 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434338 5110 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434351 5110 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434349 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434365 5110 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434378 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434391 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434404 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434415 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434427 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434439 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434451 5110 reconciler_common.go:299] 
"Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434465 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434477 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434491 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434502 5110 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434515 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434526 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434537 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434554 5110 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434566 5110 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434577 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434590 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434603 5110 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434616 5110 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434644 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 
14:16:49.434658 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434783 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434799 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.434817 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435072 5110 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435093 5110 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435184 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435230 5110 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435246 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435259 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435272 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435285 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435298 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435311 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435312 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435477 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435494 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435506 5110 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435519 5110 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435523 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6bfecfa4-ce38-4a92-a3dc-588176267b96-mcd-auth-proxy-config\") pod \"machine-config-daemon-grf5q\" (UID: \"6bfecfa4-ce38-4a92-a3dc-588176267b96\") " pod="openshift-machine-config-operator/machine-config-daemon-grf5q" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435532 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435545 5110 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435561 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435576 5110 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435588 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435601 5110 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435653 5110 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435700 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc 
kubenswrapper[5110]: I0122 14:16:49.435827 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435873 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435729 5110 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435920 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435933 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435946 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435959 5110 
reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435972 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.435986 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436000 5110 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436011 5110 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436022 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436034 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436048 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: 
\"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436060 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436072 5110 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436085 5110 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436098 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436110 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436125 5110 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436138 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 
22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436150 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436162 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436175 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436188 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436202 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436214 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436228 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436241 5110 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436254 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436266 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436283 5110 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436299 5110 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436313 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436327 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436340 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436337 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436352 5110 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436408 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436425 5110 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.436443 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.438008 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fzdc\" (UniqueName: \"kubernetes.io/projected/0f1f1e72-df44-4f16-87b5-a1ec3b2831c7-kube-api-access-5fzdc\") pod \"node-ca-9ggqd\" (UID: \"0f1f1e72-df44-4f16-87b5-a1ec3b2831c7\") " pod="openshift-image-registry/node-ca-9ggqd" Jan 22 14:16:49 crc 
kubenswrapper[5110]: I0122 14:16:49.439145 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0f1f1e72-df44-4f16-87b5-a1ec3b2831c7-serviceca\") pod \"node-ca-9ggqd\" (UID: \"0f1f1e72-df44-4f16-87b5-a1ec3b2831c7\") " pod="openshift-image-registry/node-ca-9ggqd" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.439242 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.439385 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-c64tm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebd2fbce-0bc4-4666-adab-0cb2648f026f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready 
status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kgkq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:16:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c64tm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.443273 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.443649 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.444110 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.446691 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6bfecfa4-ce38-4a92-a3dc-588176267b96-proxy-tls\") pod \"machine-config-daemon-grf5q\" (UID: \"6bfecfa4-ce38-4a92-a3dc-588176267b96\") " pod="openshift-machine-config-operator/machine-config-daemon-grf5q" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.447222 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.447334 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6327725f-9fd9-4ea3-b51f-3dfb27454a19-ovn-node-metrics-cert\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.447961 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.448000 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b3ccd93-5778-48f1-a454-e389c9019370-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-jjg8z\" (UID: \"5b3ccd93-5778-48f1-a454-e389c9019370\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.448229 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b3ccd93-5778-48f1-a454-e389c9019370-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-jjg8z\" (UID: \"5b3ccd93-5778-48f1-a454-e389c9019370\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.448288 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod 
"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.448642 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.449828 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.449976 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.450143 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.450381 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.450463 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.450517 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.450521 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.450549 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.451266 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgkq4\" (UniqueName: \"kubernetes.io/projected/ebd2fbce-0bc4-4666-adab-0cb2648f026f-kube-api-access-kgkq4\") pod \"node-resolver-c64tm\" (UID: \"ebd2fbce-0bc4-4666-adab-0cb2648f026f\") " pod="openshift-dns/node-resolver-c64tm" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.451285 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z9bv\" (UniqueName: \"kubernetes.io/projected/56adef60-8300-4815-a17b-19370e323339-kube-api-access-4z9bv\") pod \"multus-additional-cni-plugins-b7k8l\" (UID: \"56adef60-8300-4815-a17b-19370e323339\") " pod="openshift-multus/multus-additional-cni-plugins-b7k8l" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.451368 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.451848 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.452134 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.451921 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.452316 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.452356 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2jhn\" (UniqueName: \"kubernetes.io/projected/6327725f-9fd9-4ea3-b51f-3dfb27454a19-kube-api-access-b2jhn\") pod \"ovnkube-node-ms6jk\" (UID: \"6327725f-9fd9-4ea3-b51f-3dfb27454a19\") " pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.452442 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9zmq\" (UniqueName: \"kubernetes.io/projected/a4b81444-a003-4b75-87dd-90ef7445dde3-kube-api-access-q9zmq\") pod \"multus-rj5zq\" (UID: \"a4b81444-a003-4b75-87dd-90ef7445dde3\") " pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.452694 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.452704 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.452788 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.452893 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.453054 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b7k8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56adef60-8300-4815-a17b-19370e323339\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z9bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z9bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z9bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z9bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z9bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z9bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4z9bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:16:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b7k8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.453554 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.454022 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r9ss\" (UniqueName: \"kubernetes.io/projected/5b3ccd93-5778-48f1-a454-e389c9019370-kube-api-access-5r9ss\") pod \"ovnkube-control-plane-57b78d8988-jjg8z\" (UID: \"5b3ccd93-5778-48f1-a454-e389c9019370\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.454171 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7jgz\" (UniqueName: \"kubernetes.io/projected/455fa20f-c1d4-4086-8874-9526d4c4d24d-kube-api-access-r7jgz\") pod \"network-metrics-daemon-js5pl\" (UID: \"455fa20f-c1d4-4086-8874-9526d4c4d24d\") " pod="openshift-multus/network-metrics-daemon-js5pl" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.454458 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.454548 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.454840 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.455775 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.456020 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp5ph\" (UniqueName: \"kubernetes.io/projected/6bfecfa4-ce38-4a92-a3dc-588176267b96-kube-api-access-mp5ph\") pod \"machine-config-daemon-grf5q\" (UID: \"6bfecfa4-ce38-4a92-a3dc-588176267b96\") " pod="openshift-machine-config-operator/machine-config-daemon-grf5q" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.456385 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.456508 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.456549 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.456611 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.456780 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.457082 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.457150 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.457189 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.457261 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.463895 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.466152 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.466864 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.471958 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b3ccd93-5778-48f1-a454-e389c9019370\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r9ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5r9ss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:16:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-jjg8z\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.481512 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.482765 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-rj5zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b81444-a003-4b75-87dd-90ef7445dde3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9zmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:16:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rj5zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.484765 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.490971 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.491020 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.491031 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.491048 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.491059 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:49Z","lastTransitionTime":"2026-01-22T14:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.496431 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6327725f-9fd9-4ea3-b51f-3dfb27454a19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2jhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2jhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2jhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2jhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2jhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2jhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2jhn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2jhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b2jhn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:16:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ms6jk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.506473 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9ggqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f1f1e72-df44-4f16-87b5-a1ec3b2831c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5fzdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:16:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9ggqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.527696 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea94f68f-2af5-4a02-8454-4a89c66fb182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://1cd451579a994467439d569d9193e371a2f195c28e0cf59bf1cea771afb93f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:15:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4218965e480ae0724ffb2ed0046da5c13adc9775781cd6655e4eee11b991bcbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:15:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d96e4d28cb0a0172ec7bca9bb0b8abd60d4408d5162fc68cb60aa76ee29be139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:15:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f7c91988ce5d51e9bfdfbb8f42af44508dc734056eba57a8e902f3ef0fa9700b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:15:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://78889d8bd043c44ad144a0c7886373ae1c52c095a91f6c24a6e2f1482d735c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:15:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b8ec4f1e07794a08179e3f4d19be5c925b4f89bf9f1189111c866e26ac045614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec4f1e07794a08179e3f4d19be5c925b4f89bf9f1189111c866e26ac045614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T14:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T14:15:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://5ea2ef2acbf2852d0bf69defad220c1eddd4f00f226569fe9ae074a5dadb8c80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea2ef2acbf2852d0bf69defad220c1eddd4f00f226569fe9ae074a5dadb8c80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T14:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T14:15:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://91be0b9f5661930cd4f206e761c2313265d51ba016a5729ff1331f6f6a4d894e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://91be0b9f5661930cd4f206e761c2313265d51ba016a5729ff1331f6f6a4d894e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T14:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T14:15:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.538022 5110 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.538271 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.538396 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 
14:16:49.538469 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.538551 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.538607 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.538684 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.538761 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.538840 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.538901 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.538953 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.539006 5110 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.539056 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.539112 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.539168 5110 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.539230 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.539287 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.539344 5110 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.539404 5110 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.539462 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.539526 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.539449 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"869455b8-f444-4ee3-9a9a-c737007425b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://872c2dac50c7f7b271cfb6e7cdfa4d5e922da39ef42941678469947454b1e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:15:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://7d2c08e39879f53e21ccd4ecb20d62edd65c7f5acd2c2025f318ee2f6eef942f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:15:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e4e44e7b63df2eabb368c778283fa297c5627baef12f749319188b018933e4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:15:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d92e709029c46faaa9145ae147eb23298fd350281c9da48e17ca1a5fe7b5de07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d92e709029c46faaa9145ae147eb23298fd350281c9da48e17ca1a5fe7b5de07\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T14:16:27Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 14:16:27.350251 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 14:16:27.350384 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 14:16:27.351217 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1228718129/tls.crt::/tmp/serving-cert-1228718129/tls.key\\\\\\\"\\\\nI0122 14:16:27.934961 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 14:16:27.938825 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 14:16:27.938844 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 14:16:27.938871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 14:16:27.938879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0122 14:16:27.942198 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 14:16:27.942238 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 14:16:27.942244 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 14:16:27.942250 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 14:16:27.942252 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 14:16:27.942256 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 14:16:27.942259 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 14:16:27.942333 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 14:16:27.943705 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T14:16:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3614847fd33273d653fb4a663693171aff2ceb59eaba786247135909473a1a9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:15:23Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f27cdc5021a20b7752733cb6527b8791d142b8cdeed6c285d545ed03cf8d8096\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f27cdc5021a20b7752733cb6527b8791d142b8cdeed6c285d545ed03cf8d8096\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T14:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T14:15:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.539589 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540231 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540262 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc 
kubenswrapper[5110]: I0122 14:16:49.540282 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540318 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540337 5110 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540351 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540367 5110 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540387 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540401 5110 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540448 5110 reconciler_common.go:299] 
"Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540472 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540489 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540521 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540534 5110 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540547 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540560 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540575 5110 reconciler_common.go:299] "Volume detached for 
volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540586 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540596 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540612 5110 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540640 5110 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540654 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540665 5110 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540678 5110 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") 
on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540689 5110 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540700 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540713 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540728 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540739 5110 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540749 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540762 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath 
\"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540774 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540787 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540797 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540811 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540824 5110 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540836 5110 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540846 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540857 5110 reconciler_common.go:299] "Volume detached for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540867 5110 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540877 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540886 5110 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540901 5110 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.540923 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.553944 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1db11091-c5d8-4dd9-8dd9-3ff06542b6ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d1459b9c41b7d6069ce5b8517276b497b11aff1bc62b8073f3a370905ad2e3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:15:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d58e05af32372135da721ce73766f6ea10f366c5c1b13a318803ff65725c882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:15:21Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2b3c4fd9d2080630f69cecbc801ceda2e4e82ce9e46e4c1e77cb0e704ff57d63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:15:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df1ef784b1eb41885d2c53e0b4af0912047f8558bc39a160c44da724182b8997\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T14:15:22Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:15:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.555009 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.564347 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.573247 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.574862 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.584188 5110 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-js5pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"455fa20f-c1d4-4086-8874-9526d4c4d24d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T14:16:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7jgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7jgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T14:16:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-js5pl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.589145 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 14:16:49 crc kubenswrapper[5110]: W0122 14:16:49.589285 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-796f5799636c95416a7a555f5b02486163e187cf2396416b963be1f4e286f4dd WatchSource:0}: Error finding container 796f5799636c95416a7a555f5b02486163e187cf2396416b963be1f4e286f4dd: Status 404 returned error can't find the container with id 796f5799636c95416a7a555f5b02486163e187cf2396416b963be1f4e286f4dd Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.592970 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.592997 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.593008 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.593022 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.593035 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:49Z","lastTransitionTime":"2026-01-22T14:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.600995 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-c64tm" Jan 22 14:16:49 crc kubenswrapper[5110]: W0122 14:16:49.621507 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebd2fbce_0bc4_4666_adab_0cb2648f026f.slice/crio-86d7426af401138b8ac2145fa8e2e25efeaf480c235ee4512792dd786c59015b WatchSource:0}: Error finding container 86d7426af401138b8ac2145fa8e2e25efeaf480c235ee4512792dd786c59015b: Status 404 returned error can't find the container with id 86d7426af401138b8ac2145fa8e2e25efeaf480c235ee4512792dd786c59015b Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.632297 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-c64tm" event={"ID":"ebd2fbce-0bc4-4666-adab-0cb2648f026f","Type":"ContainerStarted","Data":"86d7426af401138b8ac2145fa8e2e25efeaf480c235ee4512792dd786c59015b"} Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.633280 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"71d29bf1115caf4aa8f08bb3f52eb76a7303155a5c0ece6d85d6a37f1b8c83f9"} Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.634538 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"796f5799636c95416a7a555f5b02486163e187cf2396416b963be1f4e286f4dd"} Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.635415 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" 
event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"2072ae98dc3833ab68eb834778a55dc7bba1472824874b837b2f49a206723104"} Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.646682 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.653239 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-9ggqd" Jan 22 14:16:49 crc kubenswrapper[5110]: W0122 14:16:49.662003 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b3ccd93_5778_48f1_a454_e389c9019370.slice/crio-4ad20db3d56d28dcb268657b0d4cc208cedc4e0798e782f2b3b01d648f9f4ca5 WatchSource:0}: Error finding container 4ad20db3d56d28dcb268657b0d4cc208cedc4e0798e782f2b3b01d648f9f4ca5: Status 404 returned error can't find the container with id 4ad20db3d56d28dcb268657b0d4cc208cedc4e0798e782f2b3b01d648f9f4ca5 Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.662156 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rj5zq" Jan 22 14:16:49 crc kubenswrapper[5110]: W0122 14:16:49.663034 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f1f1e72_df44_4f16_87b5_a1ec3b2831c7.slice/crio-d1ed9c3f7e448f9c9128a0cd226dd1cabec7fcbfa34bfe1f54b09f4a28035400 WatchSource:0}: Error finding container d1ed9c3f7e448f9c9128a0cd226dd1cabec7fcbfa34bfe1f54b09f4a28035400: Status 404 returned error can't find the container with id d1ed9c3f7e448f9c9128a0cd226dd1cabec7fcbfa34bfe1f54b09f4a28035400 Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.667731 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.676796 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-b7k8l" Jan 22 14:16:49 crc kubenswrapper[5110]: W0122 14:16:49.678870 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4b81444_a003_4b75_87dd_90ef7445dde3.slice/crio-a97de43f42603eb316739b6630d34b4e6b7a913ee8a1ee4a3bc50a8634f60d2c WatchSource:0}: Error finding container a97de43f42603eb316739b6630d34b4e6b7a913ee8a1ee4a3bc50a8634f60d2c: Status 404 returned error can't find the container with id a97de43f42603eb316739b6630d34b4e6b7a913ee8a1ee4a3bc50a8634f60d2c Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.690345 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.694181 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.694431 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.694442 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.694454 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.694465 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:49Z","lastTransitionTime":"2026-01-22T14:16:49Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.796857 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.796898 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.796908 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.796922 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.796931 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:49Z","lastTransitionTime":"2026-01-22T14:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.847480 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.847544 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.847565 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.847586 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.847714 5110 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.847725 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.847769 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:16:50.847755857 +0000 UTC m=+91.069840216 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.847780 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:16:50.847774997 +0000 UTC m=+91.069859356 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.847800 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.847814 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.847827 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.847858 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 14:16:50.847848099 +0000 UTC m=+91.069932458 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.847908 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.847920 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.847928 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.847954 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 14:16:50.847945682 +0000 UTC m=+91.070030041 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.898852 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.898886 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.898894 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.898907 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.898915 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:49Z","lastTransitionTime":"2026-01-22T14:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.947973 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:49 crc kubenswrapper[5110]: I0122 14:16:49.948114 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs\") pod \"network-metrics-daemon-js5pl\" (UID: \"455fa20f-c1d4-4086-8874-9526d4c4d24d\") " pod="openshift-multus/network-metrics-daemon-js5pl" Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.948232 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.948309 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:50.948264761 +0000 UTC m=+91.170349120 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:49 crc kubenswrapper[5110]: E0122 14:16:49.948347 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs podName:455fa20f-c1d4-4086-8874-9526d4c4d24d nodeName:}" failed. No retries permitted until 2026-01-22 14:16:50.948337253 +0000 UTC m=+91.170421612 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs") pod "network-metrics-daemon-js5pl" (UID: "455fa20f-c1d4-4086-8874-9526d4c4d24d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.001504 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.001540 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.001552 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.001566 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.001577 5110 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:50Z","lastTransitionTime":"2026-01-22T14:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.103782 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.103841 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.103851 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.103865 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.103873 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:50Z","lastTransitionTime":"2026-01-22T14:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.205712 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.205757 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.205769 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.205787 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.205799 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:50Z","lastTransitionTime":"2026-01-22T14:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.277192 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.278255 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.279928 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.281099 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.283437 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.288376 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.293040 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.294711 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" 
path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.295573 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.297088 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.298081 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.299895 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.300833 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.302998 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.303542 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.305060 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" 
path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.308277 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.309311 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.310633 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.311587 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.312995 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.314410 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.315553 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.316396 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" 
path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.316536 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.316700 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.316730 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.316786 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.316805 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:50Z","lastTransitionTime":"2026-01-22T14:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.317721 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.318589 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.319732 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.320524 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.323772 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.324263 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.325777 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.326639 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" 
path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.328019 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.329158 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.329808 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.330392 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.332351 5110 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.332497 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.336234 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.338134 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.339506 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.341674 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.342407 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.344317 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.346853 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.347349 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.348504 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.349452 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.350684 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.351333 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.352396 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.353030 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.354186 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.355098 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.356496 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.357188 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.358472 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.359319 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.408518 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=1.408503664 podStartE2EDuration="1.408503664s" podCreationTimestamp="2026-01-22 14:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:50.408456093 +0000 UTC m=+90.630540452" watchObservedRunningTime="2026-01-22 14:16:50.408503664 +0000 UTC m=+90.630588013" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.423099 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.423163 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.423177 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.423196 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.423210 5110 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:50Z","lastTransitionTime":"2026-01-22T14:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.463409 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=2.463388283 podStartE2EDuration="2.463388283s" podCreationTimestamp="2026-01-22 14:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:50.447919865 +0000 UTC m=+90.670004254" watchObservedRunningTime="2026-01-22 14:16:50.463388283 +0000 UTC m=+90.685472652" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.512945 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=1.512928762 podStartE2EDuration="1.512928762s" podCreationTimestamp="2026-01-22 14:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:50.512773898 +0000 UTC m=+90.734858287" watchObservedRunningTime="2026-01-22 14:16:50.512928762 +0000 UTC m=+90.735013111" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.525674 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.525719 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.525733 5110 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.525750 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.525764 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:50Z","lastTransitionTime":"2026-01-22T14:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.582294 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=1.582271613 podStartE2EDuration="1.582271613s" podCreationTimestamp="2026-01-22 14:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:50.581813241 +0000 UTC m=+90.803897610" watchObservedRunningTime="2026-01-22 14:16:50.582271613 +0000 UTC m=+90.804355972" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.627138 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.627203 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.627222 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.627244 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.627261 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:50Z","lastTransitionTime":"2026-01-22T14:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.640214 5110 generic.go:358] "Generic (PLEG): container finished" podID="6327725f-9fd9-4ea3-b51f-3dfb27454a19" containerID="4232716b6dec4eb0770af7f6fe58c5485bd09ff8ac54130334221bc55497b479" exitCode=0 Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.640323 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" event={"ID":"6327725f-9fd9-4ea3-b51f-3dfb27454a19","Type":"ContainerDied","Data":"4232716b6dec4eb0770af7f6fe58c5485bd09ff8ac54130334221bc55497b479"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.640374 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" event={"ID":"6327725f-9fd9-4ea3-b51f-3dfb27454a19","Type":"ContainerStarted","Data":"3809fbcfe65fca38463a6ad6ae1209621882853e410ff2d576f38defe3c0d101"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.641927 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9ggqd" event={"ID":"0f1f1e72-df44-4f16-87b5-a1ec3b2831c7","Type":"ContainerStarted","Data":"f9624a053023c831360187e363e4e733c77e886a4ae62c94eb892c1ad5e81315"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.641983 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9ggqd" 
event={"ID":"0f1f1e72-df44-4f16-87b5-a1ec3b2831c7","Type":"ContainerStarted","Data":"d1ed9c3f7e448f9c9128a0cd226dd1cabec7fcbfa34bfe1f54b09f4a28035400"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.643296 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"dee7cb589ae2a98a1284026d270e07bbcab1012b6e0ddebdce2c86c81c6af657"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.645253 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z" event={"ID":"5b3ccd93-5778-48f1-a454-e389c9019370","Type":"ContainerStarted","Data":"d14f6b3891fee127fc248e7d5cc91e2d380e848b4aff39fe6b8fdaf2d6ce0662"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.645300 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z" event={"ID":"5b3ccd93-5778-48f1-a454-e389c9019370","Type":"ContainerStarted","Data":"ea0f42173226e7aec6b6e31cf90a979c101b399e802ac49d46e9bbf9909edf6a"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.645312 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z" event={"ID":"5b3ccd93-5778-48f1-a454-e389c9019370","Type":"ContainerStarted","Data":"4ad20db3d56d28dcb268657b0d4cc208cedc4e0798e782f2b3b01d648f9f4ca5"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.646926 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-c64tm" event={"ID":"ebd2fbce-0bc4-4666-adab-0cb2648f026f","Type":"ContainerStarted","Data":"52382b8e45fc052ae583e0ee8c7da036976caa1e5ebfc84192969c0f61947d62"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.648478 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-grf5q" event={"ID":"6bfecfa4-ce38-4a92-a3dc-588176267b96","Type":"ContainerStarted","Data":"9db9c8abacb49bb53df15031b755599e37ef94a9488df96008b028e43980539d"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.648504 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" event={"ID":"6bfecfa4-ce38-4a92-a3dc-588176267b96","Type":"ContainerStarted","Data":"6ecaf8dd09571a6f4b5924d1e4734e6a3c16b4eff3df4258d7238252038bcd11"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.648513 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" event={"ID":"6bfecfa4-ce38-4a92-a3dc-588176267b96","Type":"ContainerStarted","Data":"9c2e37acad1696faae38539f1a53e3bd299a7696121cd111e1c63cf8f57b49a8"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.650037 5110 generic.go:358] "Generic (PLEG): container finished" podID="56adef60-8300-4815-a17b-19370e323339" containerID="bdd8c8dd3908fc30caa8afae8480e0baeda3507d0db517f5bea718faa3d46bc7" exitCode=0 Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.650167 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b7k8l" event={"ID":"56adef60-8300-4815-a17b-19370e323339","Type":"ContainerDied","Data":"bdd8c8dd3908fc30caa8afae8480e0baeda3507d0db517f5bea718faa3d46bc7"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.650191 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b7k8l" event={"ID":"56adef60-8300-4815-a17b-19370e323339","Type":"ContainerStarted","Data":"d8c51f2b3e0eca23d1cd5dd8d97d0a1fcb6584f48371ec8c85b7162325c1412b"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.651527 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rj5zq" 
event={"ID":"a4b81444-a003-4b75-87dd-90ef7445dde3","Type":"ContainerStarted","Data":"211eac839a15e8c10f8363a3cea314528d2f90857eed53d5aeb043d62799f321"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.651554 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rj5zq" event={"ID":"a4b81444-a003-4b75-87dd-90ef7445dde3","Type":"ContainerStarted","Data":"a97de43f42603eb316739b6630d34b4e6b7a913ee8a1ee4a3bc50a8634f60d2c"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.653236 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"732bc3c9f1cf99790655fd56cc6897d4b716c212969fecdaa47988a131b690ea"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.653271 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"6558ca939cd3f4ca722018ef329a71245cf46ffc8a21322a8975d4b9d2b11941"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.707066 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" podStartSLOduration=70.707042118 podStartE2EDuration="1m10.707042118s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:50.706982556 +0000 UTC m=+90.929066925" watchObservedRunningTime="2026-01-22 14:16:50.707042118 +0000 UTC m=+90.929126477" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.720091 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-c64tm" podStartSLOduration=70.720074252 podStartE2EDuration="1m10.720074252s" 
podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:50.719840416 +0000 UTC m=+90.941924775" watchObservedRunningTime="2026-01-22 14:16:50.720074252 +0000 UTC m=+90.942158611" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.730005 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.730044 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.730055 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.730070 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.730081 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:50Z","lastTransitionTime":"2026-01-22T14:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.775336 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-jjg8z" podStartSLOduration=70.775318701 podStartE2EDuration="1m10.775318701s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:50.757395367 +0000 UTC m=+90.979479756" watchObservedRunningTime="2026-01-22 14:16:50.775318701 +0000 UTC m=+90.997403060" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.775490 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rj5zq" podStartSLOduration=70.775487485 podStartE2EDuration="1m10.775487485s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:50.775213188 +0000 UTC m=+90.997297557" watchObservedRunningTime="2026-01-22 14:16:50.775487485 +0000 UTC m=+90.997571834" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.787843 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-9ggqd" podStartSLOduration=70.787823041 podStartE2EDuration="1m10.787823041s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:50.787443101 +0000 UTC m=+91.009527470" watchObservedRunningTime="2026-01-22 14:16:50.787823041 +0000 UTC m=+91.009907410" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.834003 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.834060 
5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.834077 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.834100 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.834116 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:50Z","lastTransitionTime":"2026-01-22T14:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.858584 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.858661 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.858692 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.858721 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:16:50 crc kubenswrapper[5110]: E0122 14:16:50.858858 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:16:50 crc kubenswrapper[5110]: E0122 14:16:50.858925 5110 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:16:50 crc kubenswrapper[5110]: E0122 14:16:50.858979 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:16:52.85896171 +0000 UTC m=+93.081046069 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:16:50 crc kubenswrapper[5110]: E0122 14:16:50.859336 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:16:50 crc kubenswrapper[5110]: E0122 14:16:50.859373 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:16:52.85936202 +0000 UTC m=+93.081446379 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:16:50 crc kubenswrapper[5110]: E0122 14:16:50.859431 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:16:50 crc kubenswrapper[5110]: E0122 14:16:50.859441 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:16:50 crc kubenswrapper[5110]: E0122 14:16:50.859452 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod 
openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:16:50 crc kubenswrapper[5110]: E0122 14:16:50.859482 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 14:16:52.859473053 +0000 UTC m=+93.081557412 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:16:50 crc kubenswrapper[5110]: E0122 14:16:50.859499 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:16:50 crc kubenswrapper[5110]: E0122 14:16:50.859509 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:16:50 crc kubenswrapper[5110]: E0122 14:16:50.859535 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. 
No retries permitted until 2026-01-22 14:16:52.859527825 +0000 UTC m=+93.081612184 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.935723 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.935774 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.935788 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.935806 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.935818 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:50Z","lastTransitionTime":"2026-01-22T14:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.959189 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:16:50 crc kubenswrapper[5110]: I0122 14:16:50.959312 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs\") pod \"network-metrics-daemon-js5pl\" (UID: \"455fa20f-c1d4-4086-8874-9526d4c4d24d\") " pod="openshift-multus/network-metrics-daemon-js5pl"
Jan 22 14:16:50 crc kubenswrapper[5110]: E0122 14:16:50.959488 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 14:16:50 crc kubenswrapper[5110]: E0122 14:16:50.959515 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:52.959470474 +0000 UTC m=+93.181554833 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:16:50 crc kubenswrapper[5110]: E0122 14:16:50.959569 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs podName:455fa20f-c1d4-4086-8874-9526d4c4d24d nodeName:}" failed. No retries permitted until 2026-01-22 14:16:52.959556366 +0000 UTC m=+93.181640725 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs") pod "network-metrics-daemon-js5pl" (UID: "455fa20f-c1d4-4086-8874-9526d4c4d24d") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.038469 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.038839 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.038853 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.038870 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.038883 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:51Z","lastTransitionTime":"2026-01-22T14:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.143984 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.144138 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.144223 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.144384 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.144562 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:51Z","lastTransitionTime":"2026-01-22T14:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.246082 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.246116 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.246125 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.246140 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.246150 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:51Z","lastTransitionTime":"2026-01-22T14:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.272831 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 14:16:51 crc kubenswrapper[5110]: E0122 14:16:51.272959 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.273346 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 14:16:51 crc kubenswrapper[5110]: E0122 14:16:51.273411 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.273477 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-js5pl"
Jan 22 14:16:51 crc kubenswrapper[5110]: E0122 14:16:51.273531 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-js5pl" podUID="455fa20f-c1d4-4086-8874-9526d4c4d24d"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.273645 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 14:16:51 crc kubenswrapper[5110]: E0122 14:16:51.273764 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.348596 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.348660 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.348670 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.348687 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.348698 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:51Z","lastTransitionTime":"2026-01-22T14:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.453397 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.453434 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.453443 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.453457 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.453466 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:51Z","lastTransitionTime":"2026-01-22T14:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.556646 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.556719 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.556731 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.556751 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.556781 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:51Z","lastTransitionTime":"2026-01-22T14:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.661401 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.661845 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.661872 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.661904 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.661928 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:51Z","lastTransitionTime":"2026-01-22T14:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.700604 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b7k8l" event={"ID":"56adef60-8300-4815-a17b-19370e323339","Type":"ContainerStarted","Data":"01e47f30e15b2cafd462919d8f27c49ce9ddf2217dceda3e9f840062dcb3f9ac"}
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.705606 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" event={"ID":"6327725f-9fd9-4ea3-b51f-3dfb27454a19","Type":"ContainerStarted","Data":"fe8c38b41da23c7c311771ec29f1a279b2f541460ec61d0da028a13ba76ec52b"}
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.705729 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" event={"ID":"6327725f-9fd9-4ea3-b51f-3dfb27454a19","Type":"ContainerStarted","Data":"f4c3719bc15b0f464a0a3234aff594679f7dcec7f8ee78545d6d64a34300f638"}
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.705788 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" event={"ID":"6327725f-9fd9-4ea3-b51f-3dfb27454a19","Type":"ContainerStarted","Data":"e65e3233987dea0e0bfb8a1904ffc8bedd67516a1dcb40b69473f4fec15bf235"}
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.705804 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" event={"ID":"6327725f-9fd9-4ea3-b51f-3dfb27454a19","Type":"ContainerStarted","Data":"87c41a1fd779ab447c5550bf937387a8d972fd3a0f61fbd9c59f884276d05563"}
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.766669 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.766732 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.766741 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.766754 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.766763 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:51Z","lastTransitionTime":"2026-01-22T14:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.868930 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.868976 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.868988 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.869006 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.869018 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:51Z","lastTransitionTime":"2026-01-22T14:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.971478 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.971922 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.972069 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.972212 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:51 crc kubenswrapper[5110]: I0122 14:16:51.972444 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:51Z","lastTransitionTime":"2026-01-22T14:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.075099 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.075153 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.075166 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.075183 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.075195 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:52Z","lastTransitionTime":"2026-01-22T14:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.177210 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.177261 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.177277 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.177295 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.177311 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:52Z","lastTransitionTime":"2026-01-22T14:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.279444 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.279525 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.279555 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.279586 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.279609 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:52Z","lastTransitionTime":"2026-01-22T14:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.381811 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.381851 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.381860 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.381872 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.381881 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:52Z","lastTransitionTime":"2026-01-22T14:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.483593 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.483650 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.483661 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.483673 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.483683 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:52Z","lastTransitionTime":"2026-01-22T14:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.585245 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.585322 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.585343 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.585369 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.585390 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:52Z","lastTransitionTime":"2026-01-22T14:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.687582 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.687658 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.687673 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.687693 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.687708 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:52Z","lastTransitionTime":"2026-01-22T14:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.708423 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.708490 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.708504 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.708523 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.708536 5110 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T14:16:52Z","lastTransitionTime":"2026-01-22T14:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.712392 5110 generic.go:358] "Generic (PLEG): container finished" podID="56adef60-8300-4815-a17b-19370e323339" containerID="01e47f30e15b2cafd462919d8f27c49ce9ddf2217dceda3e9f840062dcb3f9ac" exitCode=0
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.712467 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b7k8l" event={"ID":"56adef60-8300-4815-a17b-19370e323339","Type":"ContainerDied","Data":"01e47f30e15b2cafd462919d8f27c49ce9ddf2217dceda3e9f840062dcb3f9ac"}
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.717225 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" event={"ID":"6327725f-9fd9-4ea3-b51f-3dfb27454a19","Type":"ContainerStarted","Data":"49627282d030c5f9db1c371ace68561ef973a1495f2395ca94523846384c65b8"}
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.717267 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" event={"ID":"6327725f-9fd9-4ea3-b51f-3dfb27454a19","Type":"ContainerStarted","Data":"fe9a572a711d620b4347d6cf83a25352324d52c5780d3f6686e0ce789c28e7f8"}
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.766374 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t"]
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.773003 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t"
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.775181 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.775221 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.775496 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.776578 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.883733 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f940b104-3153-4a29-848f-af7f293796a3-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-p7q2t\" (UID: \"f940b104-3153-4a29-848f-af7f293796a3\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t"
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.883770 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f940b104-3153-4a29-848f-af7f293796a3-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-p7q2t\" (UID: \"f940b104-3153-4a29-848f-af7f293796a3\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t"
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.883788 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f940b104-3153-4a29-848f-af7f293796a3-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-p7q2t\" (UID: \"f940b104-3153-4a29-848f-af7f293796a3\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t"
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.883824 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.883851 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f940b104-3153-4a29-848f-af7f293796a3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-p7q2t\" (UID: \"f940b104-3153-4a29-848f-af7f293796a3\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t"
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.883874 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f940b104-3153-4a29-848f-af7f293796a3-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-p7q2t\" (UID: \"f940b104-3153-4a29-848f-af7f293796a3\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t"
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.884012 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.884142 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 14:16:52 crc kubenswrapper[5110]: E0122 14:16:52.884046 5110 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.884193 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 14:16:52 crc kubenswrapper[5110]: E0122 14:16:52.884242 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 14:16:52 crc kubenswrapper[5110]: E0122 14:16:52.884253 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:16:56.884231863 +0000 UTC m=+97.106316232 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 14:16:52 crc kubenswrapper[5110]: E0122 14:16:52.884265 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 14:16:52 crc kubenswrapper[5110]: E0122 14:16:52.884278 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 14:16:52 crc kubenswrapper[5110]: E0122 14:16:52.884311 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 14:16:56.884302025 +0000 UTC m=+97.106386604 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 14:16:52 crc kubenswrapper[5110]: E0122 14:16:52.884238 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 14:16:52 crc kubenswrapper[5110]: E0122 14:16:52.884351 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:16:56.884344116 +0000 UTC m=+97.106428715 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:16:52 crc kubenswrapper[5110]: E0122 14:16:52.884354 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:16:52 crc kubenswrapper[5110]: E0122 14:16:52.884376 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:16:52 crc kubenswrapper[5110]: E0122 14:16:52.884389 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:16:52 crc kubenswrapper[5110]: E0122 14:16:52.884439 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 14:16:56.884426489 +0000 UTC m=+97.106510858 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.985231 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.985374 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f940b104-3153-4a29-848f-af7f293796a3-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-p7q2t\" (UID: \"f940b104-3153-4a29-848f-af7f293796a3\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.985400 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f940b104-3153-4a29-848f-af7f293796a3-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-p7q2t\" (UID: \"f940b104-3153-4a29-848f-af7f293796a3\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.985421 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f940b104-3153-4a29-848f-af7f293796a3-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-p7q2t\" (UID: 
\"f940b104-3153-4a29-848f-af7f293796a3\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.985447 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs\") pod \"network-metrics-daemon-js5pl\" (UID: \"455fa20f-c1d4-4086-8874-9526d4c4d24d\") " pod="openshift-multus/network-metrics-daemon-js5pl" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.985490 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f940b104-3153-4a29-848f-af7f293796a3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-p7q2t\" (UID: \"f940b104-3153-4a29-848f-af7f293796a3\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.985511 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f940b104-3153-4a29-848f-af7f293796a3-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-p7q2t\" (UID: \"f940b104-3153-4a29-848f-af7f293796a3\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.985722 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f940b104-3153-4a29-848f-af7f293796a3-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-p7q2t\" (UID: \"f940b104-3153-4a29-848f-af7f293796a3\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t" Jan 22 14:16:52 crc kubenswrapper[5110]: E0122 14:16:52.985818 5110 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:16:56.985801356 +0000 UTC m=+97.207885715 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.985834 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f940b104-3153-4a29-848f-af7f293796a3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-p7q2t\" (UID: \"f940b104-3153-4a29-848f-af7f293796a3\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t" Jan 22 14:16:52 crc kubenswrapper[5110]: E0122 14:16:52.985992 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 14:16:52 crc kubenswrapper[5110]: E0122 14:16:52.986061 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs podName:455fa20f-c1d4-4086-8874-9526d4c4d24d nodeName:}" failed. No retries permitted until 2026-01-22 14:16:56.986039322 +0000 UTC m=+97.208123701 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs") pod "network-metrics-daemon-js5pl" (UID: "455fa20f-c1d4-4086-8874-9526d4c4d24d") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.987188 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f940b104-3153-4a29-848f-af7f293796a3-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-p7q2t\" (UID: \"f940b104-3153-4a29-848f-af7f293796a3\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t" Jan 22 14:16:52 crc kubenswrapper[5110]: I0122 14:16:52.993966 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f940b104-3153-4a29-848f-af7f293796a3-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-p7q2t\" (UID: \"f940b104-3153-4a29-848f-af7f293796a3\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t" Jan 22 14:16:53 crc kubenswrapper[5110]: I0122 14:16:53.004143 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f940b104-3153-4a29-848f-af7f293796a3-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-p7q2t\" (UID: \"f940b104-3153-4a29-848f-af7f293796a3\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t" Jan 22 14:16:53 crc kubenswrapper[5110]: I0122 14:16:53.093969 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t" Jan 22 14:16:53 crc kubenswrapper[5110]: W0122 14:16:53.113076 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf940b104_3153_4a29_848f_af7f293796a3.slice/crio-f01e4d4098303ebdb1a692978aa61e634c0d4bf56d1aa41d6cad167aeddd4211 WatchSource:0}: Error finding container f01e4d4098303ebdb1a692978aa61e634c0d4bf56d1aa41d6cad167aeddd4211: Status 404 returned error can't find the container with id f01e4d4098303ebdb1a692978aa61e634c0d4bf56d1aa41d6cad167aeddd4211 Jan 22 14:16:53 crc kubenswrapper[5110]: I0122 14:16:53.249507 5110 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Jan 22 14:16:53 crc kubenswrapper[5110]: I0122 14:16:53.259279 5110 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 22 14:16:53 crc kubenswrapper[5110]: I0122 14:16:53.272191 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:16:53 crc kubenswrapper[5110]: I0122 14:16:53.272298 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:16:53 crc kubenswrapper[5110]: E0122 14:16:53.272300 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 14:16:53 crc kubenswrapper[5110]: I0122 14:16:53.272384 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-js5pl" Jan 22 14:16:53 crc kubenswrapper[5110]: E0122 14:16:53.272531 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 14:16:53 crc kubenswrapper[5110]: E0122 14:16:53.272581 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-js5pl" podUID="455fa20f-c1d4-4086-8874-9526d4c4d24d" Jan 22 14:16:53 crc kubenswrapper[5110]: I0122 14:16:53.272596 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:16:53 crc kubenswrapper[5110]: E0122 14:16:53.272695 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 14:16:53 crc kubenswrapper[5110]: I0122 14:16:53.724016 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t" event={"ID":"f940b104-3153-4a29-848f-af7f293796a3","Type":"ContainerStarted","Data":"005563f66fbb291c0c687a6bf7607982341a0410e45f75ae23397a81810c2917"} Jan 22 14:16:53 crc kubenswrapper[5110]: I0122 14:16:53.724088 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t" event={"ID":"f940b104-3153-4a29-848f-af7f293796a3","Type":"ContainerStarted","Data":"f01e4d4098303ebdb1a692978aa61e634c0d4bf56d1aa41d6cad167aeddd4211"} Jan 22 14:16:53 crc kubenswrapper[5110]: I0122 14:16:53.726538 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"b156e374aa63fec301bb3acf775d160374cc806e5bad49317278aa765ce5af06"} Jan 22 14:16:53 crc kubenswrapper[5110]: I0122 14:16:53.729658 5110 generic.go:358] "Generic (PLEG): container finished" podID="56adef60-8300-4815-a17b-19370e323339" containerID="e3d91cc2acd0ed43fc54b891c962d276da4e24ef0201b491aadadcfbc95548a5" exitCode=0 Jan 22 14:16:53 crc kubenswrapper[5110]: I0122 14:16:53.729742 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b7k8l" event={"ID":"56adef60-8300-4815-a17b-19370e323339","Type":"ContainerDied","Data":"e3d91cc2acd0ed43fc54b891c962d276da4e24ef0201b491aadadcfbc95548a5"} Jan 22 14:16:53 crc kubenswrapper[5110]: I0122 14:16:53.791277 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-p7q2t" podStartSLOduration=73.791259456 podStartE2EDuration="1m13.791259456s" 
podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:53.757075084 +0000 UTC m=+93.979159503" watchObservedRunningTime="2026-01-22 14:16:53.791259456 +0000 UTC m=+94.013343825" Jan 22 14:16:54 crc kubenswrapper[5110]: I0122 14:16:54.736252 5110 generic.go:358] "Generic (PLEG): container finished" podID="56adef60-8300-4815-a17b-19370e323339" containerID="ba4e90258f0279fdfbdc15fa61c04771fad8e00ecfec627b29cf7cf5a32a394d" exitCode=0 Jan 22 14:16:54 crc kubenswrapper[5110]: I0122 14:16:54.736340 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b7k8l" event={"ID":"56adef60-8300-4815-a17b-19370e323339","Type":"ContainerDied","Data":"ba4e90258f0279fdfbdc15fa61c04771fad8e00ecfec627b29cf7cf5a32a394d"} Jan 22 14:16:54 crc kubenswrapper[5110]: I0122 14:16:54.742908 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" event={"ID":"6327725f-9fd9-4ea3-b51f-3dfb27454a19","Type":"ContainerStarted","Data":"92575e36fe243957e51e990aa25fba399dfe462cd04ef2923fe3f4fa71b70a5d"} Jan 22 14:16:55 crc kubenswrapper[5110]: I0122 14:16:55.273370 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:16:55 crc kubenswrapper[5110]: I0122 14:16:55.273369 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-js5pl" Jan 22 14:16:55 crc kubenswrapper[5110]: I0122 14:16:55.273382 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:16:55 crc kubenswrapper[5110]: E0122 14:16:55.274091 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-js5pl" podUID="455fa20f-c1d4-4086-8874-9526d4c4d24d" Jan 22 14:16:55 crc kubenswrapper[5110]: E0122 14:16:55.274214 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 14:16:55 crc kubenswrapper[5110]: I0122 14:16:55.273487 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:16:55 crc kubenswrapper[5110]: E0122 14:16:55.274326 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 14:16:55 crc kubenswrapper[5110]: E0122 14:16:55.274561 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 14:16:55 crc kubenswrapper[5110]: I0122 14:16:55.748829 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b7k8l" event={"ID":"56adef60-8300-4815-a17b-19370e323339","Type":"ContainerStarted","Data":"d2689d3a5818dbfb6e9b122a39ca579ee14475f60891f2e972261fa498cb04f5"} Jan 22 14:16:56 crc kubenswrapper[5110]: I0122 14:16:56.755817 5110 generic.go:358] "Generic (PLEG): container finished" podID="56adef60-8300-4815-a17b-19370e323339" containerID="d2689d3a5818dbfb6e9b122a39ca579ee14475f60891f2e972261fa498cb04f5" exitCode=0 Jan 22 14:16:56 crc kubenswrapper[5110]: I0122 14:16:56.755941 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b7k8l" event={"ID":"56adef60-8300-4815-a17b-19370e323339","Type":"ContainerDied","Data":"d2689d3a5818dbfb6e9b122a39ca579ee14475f60891f2e972261fa498cb04f5"} Jan 22 14:16:56 crc kubenswrapper[5110]: I0122 14:16:56.769232 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" event={"ID":"6327725f-9fd9-4ea3-b51f-3dfb27454a19","Type":"ContainerStarted","Data":"77fada80169d63f0e75595f424e5fb0bbc6a60e469c07d889c7645d3360d0279"} Jan 22 14:16:56 crc kubenswrapper[5110]: I0122 14:16:56.769853 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:56 crc kubenswrapper[5110]: I0122 14:16:56.769882 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:56 crc kubenswrapper[5110]: I0122 14:16:56.769924 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:56 crc kubenswrapper[5110]: I0122 14:16:56.801573 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:56 crc kubenswrapper[5110]: I0122 14:16:56.806283 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" Jan 22 14:16:56 crc kubenswrapper[5110]: I0122 14:16:56.820490 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk" podStartSLOduration=76.82046852 podStartE2EDuration="1m16.82046852s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:56.818202961 +0000 UTC m=+97.040287320" watchObservedRunningTime="2026-01-22 14:16:56.82046852 +0000 UTC m=+97.042552879" Jan 22 14:16:56 crc kubenswrapper[5110]: I0122 14:16:56.930903 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:16:56 crc kubenswrapper[5110]: I0122 14:16:56.930954 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:16:56 crc kubenswrapper[5110]: I0122 14:16:56.930974 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:16:56 crc kubenswrapper[5110]: E0122 14:16:56.931139 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:16:56 crc kubenswrapper[5110]: E0122 14:16:56.931178 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:16:56 crc kubenswrapper[5110]: E0122 14:16:56.931191 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:16:56 crc kubenswrapper[5110]: E0122 14:16:56.931211 5110 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:16:56 crc kubenswrapper[5110]: I0122 14:16:56.931176 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:16:56 crc kubenswrapper[5110]: E0122 14:16:56.931143 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:16:56 crc kubenswrapper[5110]: E0122 14:16:56.931316 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:16:56 crc kubenswrapper[5110]: E0122 14:16:56.931342 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:16:56 crc kubenswrapper[5110]: E0122 14:16:56.931346 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:16:56 crc kubenswrapper[5110]: E0122 14:16:56.931260 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 14:17:04.931239836 +0000 UTC m=+105.153324195 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:16:56 crc kubenswrapper[5110]: E0122 14:16:56.931429 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:17:04.93141052 +0000 UTC m=+105.153494879 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:16:56 crc kubenswrapper[5110]: E0122 14:16:56.931448 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 14:17:04.931442631 +0000 UTC m=+105.153526990 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 14:16:56 crc kubenswrapper[5110]: E0122 14:16:56.931472 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:17:04.931466002 +0000 UTC m=+105.153550361 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 14:16:57 crc kubenswrapper[5110]: I0122 14:16:57.033108 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:16:57 crc kubenswrapper[5110]: I0122 14:16:57.033296 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs\") pod \"network-metrics-daemon-js5pl\" (UID: \"455fa20f-c1d4-4086-8874-9526d4c4d24d\") " pod="openshift-multus/network-metrics-daemon-js5pl"
Jan 22 14:16:57 crc kubenswrapper[5110]: E0122 14:16:57.033345 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:05.033310591 +0000 UTC m=+105.255394960 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:16:57 crc kubenswrapper[5110]: E0122 14:16:57.033453 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 14:16:57 crc kubenswrapper[5110]: E0122 14:16:57.033593 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs podName:455fa20f-c1d4-4086-8874-9526d4c4d24d nodeName:}" failed. No retries permitted until 2026-01-22 14:17:05.033565448 +0000 UTC m=+105.255649817 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs") pod "network-metrics-daemon-js5pl" (UID: "455fa20f-c1d4-4086-8874-9526d4c4d24d") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 14:16:57 crc kubenswrapper[5110]: I0122 14:16:57.272844 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 14:16:57 crc kubenswrapper[5110]: I0122 14:16:57.272921 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 14:16:57 crc kubenswrapper[5110]: I0122 14:16:57.272869 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-js5pl"
Jan 22 14:16:57 crc kubenswrapper[5110]: I0122 14:16:57.272858 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 14:16:57 crc kubenswrapper[5110]: E0122 14:16:57.273017 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 14:16:57 crc kubenswrapper[5110]: E0122 14:16:57.273089 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-js5pl" podUID="455fa20f-c1d4-4086-8874-9526d4c4d24d"
Jan 22 14:16:57 crc kubenswrapper[5110]: E0122 14:16:57.273164 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 14:16:57 crc kubenswrapper[5110]: E0122 14:16:57.273246 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 14:16:57 crc kubenswrapper[5110]: I0122 14:16:57.776381 5110 generic.go:358] "Generic (PLEG): container finished" podID="56adef60-8300-4815-a17b-19370e323339" containerID="3dffa86bbbb83af2ce173826f00302fd456c25cb6b257fafaff562978db1fe88" exitCode=0
Jan 22 14:16:57 crc kubenswrapper[5110]: I0122 14:16:57.776421 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b7k8l" event={"ID":"56adef60-8300-4815-a17b-19370e323339","Type":"ContainerDied","Data":"3dffa86bbbb83af2ce173826f00302fd456c25cb6b257fafaff562978db1fe88"}
Jan 22 14:16:58 crc kubenswrapper[5110]: I0122 14:16:58.784544 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b7k8l" event={"ID":"56adef60-8300-4815-a17b-19370e323339","Type":"ContainerStarted","Data":"4fb0b32d56d2da267518fb8aa7ef66e1d5d87ab337d9243012c464e5bdf67981"}
Jan 22 14:16:58 crc kubenswrapper[5110]: I0122 14:16:58.810942 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-b7k8l" podStartSLOduration=78.810925074 podStartE2EDuration="1m18.810925074s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:16:58.810710578 +0000 UTC m=+99.032794967" watchObservedRunningTime="2026-01-22 14:16:58.810925074 +0000 UTC m=+99.033009433"
Jan 22 14:16:59 crc kubenswrapper[5110]: I0122 14:16:59.212688 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-js5pl"]
Jan 22 14:16:59 crc kubenswrapper[5110]: I0122 14:16:59.213144 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-js5pl"
Jan 22 14:16:59 crc kubenswrapper[5110]: E0122 14:16:59.213255 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-js5pl" podUID="455fa20f-c1d4-4086-8874-9526d4c4d24d"
Jan 22 14:16:59 crc kubenswrapper[5110]: I0122 14:16:59.273117 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 14:16:59 crc kubenswrapper[5110]: E0122 14:16:59.273266 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 14:16:59 crc kubenswrapper[5110]: I0122 14:16:59.273651 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 14:16:59 crc kubenswrapper[5110]: E0122 14:16:59.273732 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 14:16:59 crc kubenswrapper[5110]: I0122 14:16:59.273809 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 14:16:59 crc kubenswrapper[5110]: E0122 14:16:59.273877 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 14:17:00 crc kubenswrapper[5110]: I0122 14:17:00.280844 5110 scope.go:117] "RemoveContainer" containerID="d92e709029c46faaa9145ae147eb23298fd350281c9da48e17ca1a5fe7b5de07"
Jan 22 14:17:00 crc kubenswrapper[5110]: E0122 14:17:00.281050 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 22 14:17:01 crc kubenswrapper[5110]: I0122 14:17:01.273201 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 14:17:01 crc kubenswrapper[5110]: I0122 14:17:01.273253 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 14:17:01 crc kubenswrapper[5110]: I0122 14:17:01.273267 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 14:17:01 crc kubenswrapper[5110]: E0122 14:17:01.273473 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 14:17:01 crc kubenswrapper[5110]: E0122 14:17:01.274023 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 14:17:01 crc kubenswrapper[5110]: E0122 14:17:01.274213 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 14:17:01 crc kubenswrapper[5110]: I0122 14:17:01.274349 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-js5pl"
Jan 22 14:17:01 crc kubenswrapper[5110]: E0122 14:17:01.274545 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-js5pl" podUID="455fa20f-c1d4-4086-8874-9526d4c4d24d"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.272857 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.272869 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-js5pl"
Jan 22 14:17:03 crc kubenswrapper[5110]: E0122 14:17:03.273018 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.272887 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.272865 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 14:17:03 crc kubenswrapper[5110]: E0122 14:17:03.273132 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-js5pl" podUID="455fa20f-c1d4-4086-8874-9526d4c4d24d"
Jan 22 14:17:03 crc kubenswrapper[5110]: E0122 14:17:03.273207 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 14:17:03 crc kubenswrapper[5110]: E0122 14:17:03.273314 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.380353 5110 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.380520 5110 kubelet_node_status.go:550] "Fast updating node status as it just became ready"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.420926 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-knd9m"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.676461 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-sxktl"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.676568 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.680500 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.680611 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-sxktl"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.681311 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.681339 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.680511 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.682744 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-rfm46"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.682932 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.684523 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.684820 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.685091 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.685252 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.685426 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.685465 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-qq9vl"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.685605 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.685850 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.686167 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.686275 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.686393 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.686515 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.686585 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.689257 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.691187 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-rfm46"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.692502 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.692562 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.693242 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.694013 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.694319 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.698149 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.698458 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.698466 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.700788 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.701102 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.701267 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.701377 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.701477 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.701660 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.701854 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.703001 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w84j9"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.703279 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.703495 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.703597 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.707208 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.710868 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.711276 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.711563 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.712853 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.712907 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-x4jp5"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.714216 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.714346 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.714374 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.714488 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.714730 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.716060 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-sxktl"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.716098 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-pgzmf"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.716368 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w84j9"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.716428 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.718864 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.719132 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.719304 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-s875b"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.719361 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.719467 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.722882 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.723471 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-pk2pv"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.724768 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.725227 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.726507 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.726594 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.726985 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.727004 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.727286 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.727376 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.727937 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.728243 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.728340 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.728398 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.728365 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.728484 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.728640 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.728740 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.728823 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.728914 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.736375 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.742798 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-swx4b"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.742798 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-pk2pv"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.742801 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.744439 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-s875b"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.745913 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.749609 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.750048 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.750140 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.750229 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.750287 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.750458 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.753265 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.753415 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.753469 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.753681 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-swx4b"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.753948 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.754475 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.755889 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-qkzl4"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.756015 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.756400 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.756653 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.757519 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.759165 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.761570 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-74rfw"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.761909 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.762126 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.763578 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.764108 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.764297 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.764526 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.764708 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.764988 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.765753 5110 kubelet.go:2537] "SyncLoop ADD" source="api"
pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-ntc52"] Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.765861 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.766392 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.769059 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.769343 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.769440 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.770306 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.772082 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q"] Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.774151 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.775021 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t"] Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.775087 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-ntc52" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.775191 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.788765 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-584j5"] Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.788929 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.792328 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd"] Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.792498 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.795842 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-knd9m"] Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.795969 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2m627"] Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.796141 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.799191 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-flfnb"] Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.799449 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2m627" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.800197 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.803037 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-fc65d"] Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.803287 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-flfnb" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.806694 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w87x4"] Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.806821 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-fc65d" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.811788 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk"] Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.811881 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w87x4" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.814645 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7af25bf8-c994-4704-821b-ee6df60d64f1-serving-cert\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.814695 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-audit-dir\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.814721 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7af25bf8-c994-4704-821b-ee6df60d64f1-node-pullsecrets\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.814746 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ad86e7-ae45-45ea-b4f5-ea725569075a-config\") pod \"machine-api-operator-755bb95488-sxktl\" (UID: \"e7ad86e7-ae45-45ea-b4f5-ea725569075a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-sxktl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.814783 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/e7ad86e7-ae45-45ea-b4f5-ea725569075a-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-sxktl\" (UID: \"e7ad86e7-ae45-45ea-b4f5-ea725569075a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-sxktl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.814807 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3170f172-b46b-4670-94e6-a340749c97e4-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-j9rvg\" (UID: \"3170f172-b46b-4670-94e6-a340749c97e4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.814829 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn7nn\" (UniqueName: \"kubernetes.io/projected/3170f172-b46b-4670-94e6-a340749c97e4-kube-api-access-jn7nn\") pod \"authentication-operator-7f5c659b84-j9rvg\" (UID: \"3170f172-b46b-4670-94e6-a340749c97e4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.814727 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-2vcrx"] Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.814852 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/595a4ab3-66a1-41ae-93e2-9476c1b14270-registry-certificates\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.814923 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815000 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e7ad86e7-ae45-45ea-b4f5-ea725569075a-images\") pod \"machine-api-operator-755bb95488-sxktl\" (UID: \"e7ad86e7-ae45-45ea-b4f5-ea725569075a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-sxktl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815058 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bdf8fc3-f5ff-4721-b79f-539fec1dabd5-trusted-ca\") pod \"console-operator-67c89758df-s875b\" (UID: \"1bdf8fc3-f5ff-4721-b79f-539fec1dabd5\") " pod="openshift-console-operator/console-operator-67c89758df-s875b" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815090 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3170f172-b46b-4670-94e6-a340749c97e4-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-j9rvg\" (UID: \"3170f172-b46b-4670-94e6-a340749c97e4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815111 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/595a4ab3-66a1-41ae-93e2-9476c1b14270-installation-pull-secrets\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815172 5110 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bdf8fc3-f5ff-4721-b79f-539fec1dabd5-config\") pod \"console-operator-67c89758df-s875b\" (UID: \"1bdf8fc3-f5ff-4721-b79f-539fec1dabd5\") " pod="openshift-console-operator/console-operator-67c89758df-s875b" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815192 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c99795b-25a0-4c75-87ba-3c72c10f621d-console-config\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815229 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7af25bf8-c994-4704-821b-ee6df60d64f1-config\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815264 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815330 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-config\") pod \"route-controller-manager-776cdc94d6-w9fgl\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815353 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3170f172-b46b-4670-94e6-a340749c97e4-serving-cert\") pod \"authentication-operator-7f5c659b84-j9rvg\" (UID: \"3170f172-b46b-4670-94e6-a340749c97e4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815396 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xmv6\" (UniqueName: \"kubernetes.io/projected/7af25bf8-c994-4704-821b-ee6df60d64f1-kube-api-access-9xmv6\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815423 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815466 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0202df03-1e3b-4cb3-a279-a6376a61ac6a-config\") pod \"machine-approver-54c688565-rfm46\" (UID: \"0202df03-1e3b-4cb3-a279-a6376a61ac6a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rfm46" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815492 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c99795b-25a0-4c75-87ba-3c72c10f621d-service-ca\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815513 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7af25bf8-c994-4704-821b-ee6df60d64f1-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m"
Jan 22 14:17:03 crc kubenswrapper[5110]: E0122 14:17:03.815549 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:04.315533014 +0000 UTC m=+104.537617373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815577 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815603 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0202df03-1e3b-4cb3-a279-a6376a61ac6a-auth-proxy-config\") pod \"machine-approver-54c688565-rfm46\" (UID: \"0202df03-1e3b-4cb3-a279-a6376a61ac6a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rfm46"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815643 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw2j7\" (UniqueName: \"kubernetes.io/projected/0202df03-1e3b-4cb3-a279-a6376a61ac6a-kube-api-access-vw2j7\") pod \"machine-approver-54c688565-rfm46\" (UID: \"0202df03-1e3b-4cb3-a279-a6376a61ac6a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rfm46"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815665 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c99795b-25a0-4c75-87ba-3c72c10f621d-console-oauth-config\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815692 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815723 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mfcr\" (UniqueName: \"kubernetes.io/projected/a3216b0f-6d90-469f-a674-5bfeb6bafb5c-kube-api-access-4mfcr\") pod \"openshift-apiserver-operator-846cbfc458-w84j9\" (UID: \"a3216b0f-6d90-469f-a674-5bfeb6bafb5c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w84j9" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815752 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c99795b-25a0-4c75-87ba-3c72c10f621d-console-serving-cert\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815791 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: 
\"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815808 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/595a4ab3-66a1-41ae-93e2-9476c1b14270-registry-tls\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815862 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815898 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-serving-cert\") pod \"route-controller-manager-776cdc94d6-w9fgl\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815929 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7af25bf8-c994-4704-821b-ee6df60d64f1-etcd-client\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815956 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.815984 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2szb\" (UniqueName: \"kubernetes.io/projected/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-kube-api-access-t2szb\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816007 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lmz8\" (UniqueName: \"kubernetes.io/projected/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-kube-api-access-9lmz8\") pod \"route-controller-manager-776cdc94d6-w9fgl\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816028 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3170f172-b46b-4670-94e6-a340749c97e4-config\") pod \"authentication-operator-7f5c659b84-j9rvg\" (UID: \"3170f172-b46b-4670-94e6-a340749c97e4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816056 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28t4b\" (UniqueName: \"kubernetes.io/projected/595a4ab3-66a1-41ae-93e2-9476c1b14270-kube-api-access-28t4b\") pod 
\"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816085 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-client-ca\") pod \"controller-manager-65b6cccf98-qq9vl\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816134 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s8sd\" (UniqueName: \"kubernetes.io/projected/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-kube-api-access-7s8sd\") pod \"controller-manager-65b6cccf98-qq9vl\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816195 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3216b0f-6d90-469f-a674-5bfeb6bafb5c-config\") pod \"openshift-apiserver-operator-846cbfc458-w84j9\" (UID: \"a3216b0f-6d90-469f-a674-5bfeb6bafb5c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w84j9"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816262 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-tmp\") pod \"controller-manager-65b6cccf98-qq9vl\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816287 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816346 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-serving-cert\") pod \"controller-manager-65b6cccf98-qq9vl\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816372 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/595a4ab3-66a1-41ae-93e2-9476c1b14270-ca-trust-extracted\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816394 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c99795b-25a0-4c75-87ba-3c72c10f621d-oauth-serving-cert\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816463 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fptcm\" (UniqueName: \"kubernetes.io/projected/5c99795b-25a0-4c75-87ba-3c72c10f621d-kube-api-access-fptcm\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816557 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7af25bf8-c994-4704-821b-ee6df60d64f1-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816581 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7af25bf8-c994-4704-821b-ee6df60d64f1-encryption-config\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816630 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-audit-policies\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816683 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816741 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3216b0f-6d90-469f-a674-5bfeb6bafb5c-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-w84j9\" (UID: \"a3216b0f-6d90-469f-a674-5bfeb6bafb5c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w84j9"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816778 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-qq9vl\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816827 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckv46\" (UniqueName: \"kubernetes.io/projected/e7ad86e7-ae45-45ea-b4f5-ea725569075a-kube-api-access-ckv46\") pod \"machine-api-operator-755bb95488-sxktl\" (UID: \"e7ad86e7-ae45-45ea-b4f5-ea725569075a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-sxktl"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.816848 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7af25bf8-c994-4704-821b-ee6df60d64f1-image-import-ca\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.817009 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bdf8fc3-f5ff-4721-b79f-539fec1dabd5-serving-cert\") pod \"console-operator-67c89758df-s875b\" (UID: \"1bdf8fc3-f5ff-4721-b79f-539fec1dabd5\") " pod="openshift-console-operator/console-operator-67c89758df-s875b"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.817083 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqvdz\" (UniqueName: \"kubernetes.io/projected/1bdf8fc3-f5ff-4721-b79f-539fec1dabd5-kube-api-access-sqvdz\") pod \"console-operator-67c89758df-s875b\" (UID: \"1bdf8fc3-f5ff-4721-b79f-539fec1dabd5\") " pod="openshift-console-operator/console-operator-67c89758df-s875b"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.817113 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7af25bf8-c994-4704-821b-ee6df60d64f1-audit-dir\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.817159 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.817180 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.817200 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-client-ca\") pod \"route-controller-manager-776cdc94d6-w9fgl\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.817253 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c99795b-25a0-4c75-87ba-3c72c10f621d-trusted-ca-bundle\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.817313 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.817338 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/595a4ab3-66a1-41ae-93e2-9476c1b14270-trusted-ca\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.817357 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/595a4ab3-66a1-41ae-93e2-9476c1b14270-bound-sa-token\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.817406 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0202df03-1e3b-4cb3-a279-a6376a61ac6a-machine-approver-tls\") pod \"machine-approver-54c688565-rfm46\" (UID: \"0202df03-1e3b-4cb3-a279-a6376a61ac6a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rfm46"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.817448 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-config\") pod \"controller-manager-65b6cccf98-qq9vl\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.817476 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7af25bf8-c994-4704-821b-ee6df60d64f1-audit\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.817504 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-tmp\") pod \"route-controller-manager-776cdc94d6-w9fgl\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.818777 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.818893 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2vcrx"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.821239 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.822933 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.823080 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.826763 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-z8ls4"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.826915 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.830192 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.830387 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-z8ls4"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.830573 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.837566 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.837736 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.840054 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.844913 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.845023 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.849712 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.850250 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.852261 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-f899n"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.852613 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.857450 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.857693 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-f899n"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.859947 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.860893 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rjmdz"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.861109 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.866216 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-x4jp5"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.866496 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.866527 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.866512 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.869458 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-hggst"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.869584 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.872123 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-lvg4h"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.872245 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-hggst"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.874727 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.874857 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-lvg4h"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.880367 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-49nzv"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.881057 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.881356 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.884853 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-pgzmf"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.884893 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w84j9"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.884913 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-s875b"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.884929 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-swx4b"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.884945 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-ntc52"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.884960 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.884973 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-pk2pv"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.884987 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-qq9vl"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.885003 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-584j5"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.885017 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.885032 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-2vcrx"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.885045 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.885056 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.885066 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.885109 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.885119 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-z8ls4"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.885132 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-74rfw"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.885144 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-fc65d"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.885156 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8jstw"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.885120 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-49nzv"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.889017 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.889043 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-flfnb"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.889054 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.889064 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-49nzv"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.889100 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.889112 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w87x4"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.889122 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.889133 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-f899n"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.889145 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-l7rdn"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.889400 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8jstw"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.893726 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.893866 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8jstw"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.893891 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2m627"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.893907 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-lvg4h"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.893919 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.893932 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-4zpj9"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.894169 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-l7rdn"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.897207 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-hggst"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.897239 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-l7rdn"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.897250 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc"]
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.897348 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4zpj9"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.899742 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.917909 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:03 crc kubenswrapper[5110]: E0122 14:17:03.918056 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:04.418034981 +0000 UTC m=+104.640119340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.918180 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/595a4ab3-66a1-41ae-93e2-9476c1b14270-registry-certificates\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.918266 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e7ad86e7-ae45-45ea-b4f5-ea725569075a-images\") pod \"machine-api-operator-755bb95488-sxktl\" (UID: \"e7ad86e7-ae45-45ea-b4f5-ea725569075a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-sxktl"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.918287 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bdf8fc3-f5ff-4721-b79f-539fec1dabd5-trusted-ca\") pod \"console-operator-67c89758df-s875b\" (UID: \"1bdf8fc3-f5ff-4721-b79f-539fec1dabd5\") " pod="openshift-console-operator/console-operator-67c89758df-s875b"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.918306 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/abca4105-161b-4d77-9d70-35b13bbcabfd-tmp-dir\") pod \"dns-default-flfnb\" (UID: \"abca4105-161b-4d77-9d70-35b13bbcabfd\") " pod="openshift-dns/dns-default-flfnb"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.918323 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlprj\" (UniqueName: \"kubernetes.io/projected/e867811f-a825-4545-9591-a00087eb4e33-kube-api-access-mlprj\") pod \"dns-operator-799b87ffcd-f899n\" (UID: \"e867811f-a825-4545-9591-a00087eb4e33\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-f899n"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.918340 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/abc0de56-1146-46a6-8b5b-68373a09ba37-default-certificate\") pod \"router-default-68cf44c8b8-qkzl4\" (UID: \"abc0de56-1146-46a6-8b5b-68373a09ba37\") " pod="openshift-ingress/router-default-68cf44c8b8-qkzl4"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.918365 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc0de56-1146-46a6-8b5b-68373a09ba37-service-ca-bundle\") pod \"router-default-68cf44c8b8-qkzl4\" (UID: \"abc0de56-1146-46a6-8b5b-68373a09ba37\") " pod="openshift-ingress/router-default-68cf44c8b8-qkzl4"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919013 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3170f172-b46b-4670-94e6-a340749c97e4-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-j9rvg\" (UID: \"3170f172-b46b-4670-94e6-a340749c97e4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919053 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/595a4ab3-66a1-41ae-93e2-9476c1b14270-installation-pull-secrets\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919077 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff17a5cc-b922-4d50-8639-eb19d9c97069-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-zbf7n\" (UID: \"ff17a5cc-b922-4d50-8639-eb19d9c97069\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919109 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bdf8fc3-f5ff-4721-b79f-539fec1dabd5-config\") pod \"console-operator-67c89758df-s875b\" (UID: \"1bdf8fc3-f5ff-4721-b79f-539fec1dabd5\") " pod="openshift-console-operator/console-operator-67c89758df-s875b"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919125 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c99795b-25a0-4c75-87ba-3c72c10f621d-console-config\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919143 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7af25bf8-c994-4704-821b-ee6df60d64f1-config\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919165 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919181 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2-tmp\") pod \"cluster-image-registry-operator-86c45576b9-s46r2\" (UID: \"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919199 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/abc0de56-1146-46a6-8b5b-68373a09ba37-metrics-certs\") pod \"router-default-68cf44c8b8-qkzl4\" (UID: \"abc0de56-1146-46a6-8b5b-68373a09ba37\") " pod="openshift-ingress/router-default-68cf44c8b8-qkzl4"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919218 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-config\") pod \"route-controller-manager-776cdc94d6-w9fgl\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919253 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3170f172-b46b-4670-94e6-a340749c97e4-serving-cert\") pod \"authentication-operator-7f5c659b84-j9rvg\" (UID: \"3170f172-b46b-4670-94e6-a340749c97e4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919275 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9xmv6\" (UniqueName: \"kubernetes.io/projected/7af25bf8-c994-4704-821b-ee6df60d64f1-kube-api-access-9xmv6\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919292 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919309 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0202df03-1e3b-4cb3-a279-a6376a61ac6a-config\") pod \"machine-approver-54c688565-rfm46\" (UID: \"0202df03-1e3b-4cb3-a279-a6376a61ac6a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rfm46"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919325 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c99795b-25a0-4c75-87ba-3c72c10f621d-service-ca\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919343 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7af25bf8-c994-4704-821b-ee6df60d64f1-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919358 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919378 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0202df03-1e3b-4cb3-a279-a6376a61ac6a-auth-proxy-config\") pod \"machine-approver-54c688565-rfm46\" (UID: \"0202df03-1e3b-4cb3-a279-a6376a61ac6a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rfm46"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919401 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vw2j7\" (UniqueName: \"kubernetes.io/projected/0202df03-1e3b-4cb3-a279-a6376a61ac6a-kube-api-access-vw2j7\") pod \"machine-approver-54c688565-rfm46\" (UID: \"0202df03-1e3b-4cb3-a279-a6376a61ac6a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rfm46"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919440 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c99795b-25a0-4c75-87ba-3c72c10f621d-console-oauth-config\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv"
Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919457 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"tmp\" (UniqueName: \"kubernetes.io/empty-dir/863750d1-e921-4e7e-b99b-365391af8edf-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-pfd2d\" (UID: \"863750d1-e921-4e7e-b99b-365391af8edf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919464 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e7ad86e7-ae45-45ea-b4f5-ea725569075a-images\") pod \"machine-api-operator-755bb95488-sxktl\" (UID: \"e7ad86e7-ae45-45ea-b4f5-ea725569075a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-sxktl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919480 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919497 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4mfcr\" (UniqueName: \"kubernetes.io/projected/a3216b0f-6d90-469f-a674-5bfeb6bafb5c-kube-api-access-4mfcr\") pod \"openshift-apiserver-operator-846cbfc458-w84j9\" (UID: \"a3216b0f-6d90-469f-a674-5bfeb6bafb5c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w84j9" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919516 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c99795b-25a0-4c75-87ba-3c72c10f621d-console-serving-cert\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv" Jan 22 14:17:03 
crc kubenswrapper[5110]: I0122 14:17:03.919548 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919564 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/595a4ab3-66a1-41ae-93e2-9476c1b14270-registry-tls\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919579 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d1f0610-507b-49ed-9396-89d0fd379fb4-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-w87x4\" (UID: \"9d1f0610-507b-49ed-9396-89d0fd379fb4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w87x4" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919609 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-s46r2\" (UID: \"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919662 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/8280d23b-d6cf-40ad-996e-d148f43bd0dd-tmpfs\") pod \"catalog-operator-75ff9f647d-c7wsk\" (UID: \"8280d23b-d6cf-40ad-996e-d148f43bd0dd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919682 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919697 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-serving-cert\") pod \"route-controller-manager-776cdc94d6-w9fgl\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919714 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf-config\") pod \"service-ca-operator-5b9c976747-2vcrx\" (UID: \"526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2vcrx" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919738 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7af25bf8-c994-4704-821b-ee6df60d64f1-etcd-client\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919755 5110 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919772 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t2szb\" (UniqueName: \"kubernetes.io/projected/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-kube-api-access-t2szb\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919805 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9lmz8\" (UniqueName: \"kubernetes.io/projected/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-kube-api-access-9lmz8\") pod \"route-controller-manager-776cdc94d6-w9fgl\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" Jan 22 14:17:03 crc kubenswrapper[5110]: E0122 14:17:03.919826 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:04.419814338 +0000 UTC m=+104.641898787 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919853 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3170f172-b46b-4670-94e6-a340749c97e4-config\") pod \"authentication-operator-7f5c659b84-j9rvg\" (UID: \"3170f172-b46b-4670-94e6-a340749c97e4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919905 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-28t4b\" (UniqueName: \"kubernetes.io/projected/595a4ab3-66a1-41ae-93e2-9476c1b14270-kube-api-access-28t4b\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919940 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-client-ca\") pod \"controller-manager-65b6cccf98-qq9vl\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919962 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7s8sd\" (UniqueName: \"kubernetes.io/projected/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-kube-api-access-7s8sd\") pod 
\"controller-manager-65b6cccf98-qq9vl\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.920009 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3216b0f-6d90-469f-a674-5bfeb6bafb5c-config\") pod \"openshift-apiserver-operator-846cbfc458-w84j9\" (UID: \"a3216b0f-6d90-469f-a674-5bfeb6bafb5c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w84j9" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.920040 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-tmp\") pod \"controller-manager-65b6cccf98-qq9vl\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.920057 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.920079 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/072ff625-701b-439b-841f-07ca74f91eee-serving-cert\") pod \"openshift-config-operator-5777786469-fc65d\" (UID: \"072ff625-701b-439b-841f-07ca74f91eee\") " pod="openshift-config-operator/openshift-config-operator-5777786469-fc65d" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.920096 5110 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863750d1-e921-4e7e-b99b-365391af8edf-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-pfd2d\" (UID: \"863750d1-e921-4e7e-b99b-365391af8edf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.920129 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/863750d1-e921-4e7e-b99b-365391af8edf-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-pfd2d\" (UID: \"863750d1-e921-4e7e-b99b-365391af8edf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.920158 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-serving-cert\") pod \"controller-manager-65b6cccf98-qq9vl\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.920175 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/595a4ab3-66a1-41ae-93e2-9476c1b14270-ca-trust-extracted\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.920530 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c99795b-25a0-4c75-87ba-3c72c10f621d-oauth-serving-cert\") pod \"console-64d44f6ddf-pk2pv\" (UID: 
\"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.920125 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7af25bf8-c994-4704-821b-ee6df60d64f1-config\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.920613 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.921035 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/595a4ab3-66a1-41ae-93e2-9476c1b14270-ca-trust-extracted\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.921302 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-tmp\") pod \"controller-manager-65b6cccf98-qq9vl\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.922126 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.923397 5110 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bdf8fc3-f5ff-4721-b79f-539fec1dabd5-config\") pod \"console-operator-67c89758df-s875b\" (UID: \"1bdf8fc3-f5ff-4721-b79f-539fec1dabd5\") " pod="openshift-console-operator/console-operator-67c89758df-s875b" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.923960 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0202df03-1e3b-4cb3-a279-a6376a61ac6a-config\") pod \"machine-approver-54c688565-rfm46\" (UID: \"0202df03-1e3b-4cb3-a279-a6376a61ac6a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rfm46" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.919492 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bdf8fc3-f5ff-4721-b79f-539fec1dabd5-trusted-ca\") pod \"console-operator-67c89758df-s875b\" (UID: \"1bdf8fc3-f5ff-4721-b79f-539fec1dabd5\") " pod="openshift-console-operator/console-operator-67c89758df-s875b" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.924365 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fptcm\" (UniqueName: \"kubernetes.io/projected/5c99795b-25a0-4c75-87ba-3c72c10f621d-kube-api-access-fptcm\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.924469 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863750d1-e921-4e7e-b99b-365391af8edf-config\") pod \"openshift-kube-scheduler-operator-54f497555d-pfd2d\" (UID: \"863750d1-e921-4e7e-b99b-365391af8edf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 
14:17:03.924502 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e867811f-a825-4545-9591-a00087eb4e33-metrics-tls\") pod \"dns-operator-799b87ffcd-f899n\" (UID: \"e867811f-a825-4545-9591-a00087eb4e33\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-f899n" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.924545 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff17a5cc-b922-4d50-8639-eb19d9c97069-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-zbf7n\" (UID: \"ff17a5cc-b922-4d50-8639-eb19d9c97069\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.924572 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/abc0de56-1146-46a6-8b5b-68373a09ba37-stats-auth\") pod \"router-default-68cf44c8b8-qkzl4\" (UID: \"abc0de56-1146-46a6-8b5b-68373a09ba37\") " pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.924595 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac1adcf2-2577-4d5c-86d8-7c21ccc049fb-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-72qpw\" (UID: \"ac1adcf2-2577-4d5c-86d8-7c21ccc049fb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.924633 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac1adcf2-2577-4d5c-86d8-7c21ccc049fb-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-72qpw\" (UID: 
\"ac1adcf2-2577-4d5c-86d8-7c21ccc049fb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.924658 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dkf9\" (UniqueName: \"kubernetes.io/projected/ac1adcf2-2577-4d5c-86d8-7c21ccc049fb-kube-api-access-6dkf9\") pod \"ingress-operator-6b9cb4dbcf-72qpw\" (UID: \"ac1adcf2-2577-4d5c-86d8-7c21ccc049fb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.924723 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7af25bf8-c994-4704-821b-ee6df60d64f1-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.924963 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3170f172-b46b-4670-94e6-a340749c97e4-config\") pod \"authentication-operator-7f5c659b84-j9rvg\" (UID: \"3170f172-b46b-4670-94e6-a340749c97e4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.925531 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-client-ca\") pod \"controller-manager-65b6cccf98-qq9vl\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.925666 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-config\") pod \"route-controller-manager-776cdc94d6-w9fgl\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.925781 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7af25bf8-c994-4704-821b-ee6df60d64f1-encryption-config\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.925823 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rbnc\" (UniqueName: \"kubernetes.io/projected/abc0de56-1146-46a6-8b5b-68373a09ba37-kube-api-access-5rbnc\") pod \"router-default-68cf44c8b8-qkzl4\" (UID: \"abc0de56-1146-46a6-8b5b-68373a09ba37\") " pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.925841 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4xlv\" (UniqueName: \"kubernetes.io/projected/8280d23b-d6cf-40ad-996e-d148f43bd0dd-kube-api-access-h4xlv\") pod \"catalog-operator-75ff9f647d-c7wsk\" (UID: \"8280d23b-d6cf-40ad-996e-d148f43bd0dd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926011 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7af25bf8-c994-4704-821b-ee6df60d64f1-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926148 
5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-audit-policies\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926189 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d1f0610-507b-49ed-9396-89d0fd379fb4-config\") pod \"kube-storage-version-migrator-operator-565b79b866-w87x4\" (UID: \"9d1f0610-507b-49ed-9396-89d0fd379fb4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w87x4" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926232 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztvnt\" (UniqueName: \"kubernetes.io/projected/b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2-kube-api-access-ztvnt\") pod \"cluster-image-registry-operator-86c45576b9-s46r2\" (UID: \"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926262 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926274 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926287 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3216b0f-6d90-469f-a674-5bfeb6bafb5c-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-w84j9\" (UID: \"a3216b0f-6d90-469f-a674-5bfeb6bafb5c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w84j9" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926361 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abca4105-161b-4d77-9d70-35b13bbcabfd-config-volume\") pod \"dns-default-flfnb\" (UID: \"abca4105-161b-4d77-9d70-35b13bbcabfd\") " pod="openshift-dns/dns-default-flfnb" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926449 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-s46r2\" (UID: \"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926524 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8280d23b-d6cf-40ad-996e-d148f43bd0dd-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-c7wsk\" (UID: \"8280d23b-d6cf-40ad-996e-d148f43bd0dd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk" Jan 
22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926555 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-qq9vl\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926597 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ckv46\" (UniqueName: \"kubernetes.io/projected/e7ad86e7-ae45-45ea-b4f5-ea725569075a-kube-api-access-ckv46\") pod \"machine-api-operator-755bb95488-sxktl\" (UID: \"e7ad86e7-ae45-45ea-b4f5-ea725569075a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-sxktl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926642 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrktg\" (UniqueName: \"kubernetes.io/projected/9d1f0610-507b-49ed-9396-89d0fd379fb4-kube-api-access-hrktg\") pod \"kube-storage-version-migrator-operator-565b79b866-w87x4\" (UID: \"9d1f0610-507b-49ed-9396-89d0fd379fb4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w87x4" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926721 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf-serving-cert\") pod \"service-ca-operator-5b9c976747-2vcrx\" (UID: \"526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2vcrx" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926741 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbsvz\" 
(UniqueName: \"kubernetes.io/projected/526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf-kube-api-access-mbsvz\") pod \"service-ca-operator-5b9c976747-2vcrx\" (UID: \"526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2vcrx" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926768 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7af25bf8-c994-4704-821b-ee6df60d64f1-image-import-ca\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926810 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bdf8fc3-f5ff-4721-b79f-539fec1dabd5-serving-cert\") pod \"console-operator-67c89758df-s875b\" (UID: \"1bdf8fc3-f5ff-4721-b79f-539fec1dabd5\") " pod="openshift-console-operator/console-operator-67c89758df-s875b" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926831 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qg5g\" (UniqueName: \"kubernetes.io/projected/072ff625-701b-439b-841f-07ca74f91eee-kube-api-access-9qg5g\") pod \"openshift-config-operator-5777786469-fc65d\" (UID: \"072ff625-701b-439b-841f-07ca74f91eee\") " pod="openshift-config-operator/openshift-config-operator-5777786469-fc65d" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926849 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e867811f-a825-4545-9591-a00087eb4e33-tmp-dir\") pod \"dns-operator-799b87ffcd-f899n\" (UID: \"e867811f-a825-4545-9591-a00087eb4e33\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-f899n" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 
14:17:03.926977 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcgx2\" (UniqueName: \"kubernetes.io/projected/41b28f5f-47e9-49c2-a0f3-efb26640b87f-kube-api-access-wcgx2\") pod \"migrator-866fcbc849-ntc52\" (UID: \"41b28f5f-47e9-49c2-a0f3-efb26640b87f\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-ntc52" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.926997 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8280d23b-d6cf-40ad-996e-d148f43bd0dd-srv-cert\") pod \"catalog-operator-75ff9f647d-c7wsk\" (UID: \"8280d23b-d6cf-40ad-996e-d148f43bd0dd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.927415 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/595a4ab3-66a1-41ae-93e2-9476c1b14270-registry-certificates\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.927551 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7af25bf8-c994-4704-821b-ee6df60d64f1-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.927581 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac1adcf2-2577-4d5c-86d8-7c21ccc049fb-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-72qpw\" (UID: 
\"ac1adcf2-2577-4d5c-86d8-7c21ccc049fb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.927693 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/595a4ab3-66a1-41ae-93e2-9476c1b14270-installation-pull-secrets\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.927710 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c99795b-25a0-4c75-87ba-3c72c10f621d-console-serving-cert\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.927902 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sqvdz\" (UniqueName: \"kubernetes.io/projected/1bdf8fc3-f5ff-4721-b79f-539fec1dabd5-kube-api-access-sqvdz\") pod \"console-operator-67c89758df-s875b\" (UID: \"1bdf8fc3-f5ff-4721-b79f-539fec1dabd5\") " pod="openshift-console-operator/console-operator-67c89758df-s875b" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.928057 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7af25bf8-c994-4704-821b-ee6df60d64f1-audit-dir\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.928099 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.928150 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.928178 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-client-ca\") pod \"route-controller-manager-776cdc94d6-w9fgl\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.928206 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c99795b-25a0-4c75-87ba-3c72c10f621d-service-ca\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.928243 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-audit-policies\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.928311 5110 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c99795b-25a0-4c75-87ba-3c72c10f621d-console-oauth-config\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.928315 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c99795b-25a0-4c75-87ba-3c72c10f621d-oauth-serving-cert\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.928352 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7af25bf8-c994-4704-821b-ee6df60d64f1-audit-dir\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.928188 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3170f172-b46b-4670-94e6-a340749c97e4-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-j9rvg\" (UID: \"3170f172-b46b-4670-94e6-a340749c97e4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.928395 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c99795b-25a0-4c75-87ba-3c72c10f621d-trusted-ca-bundle\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.928689 5110 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7af25bf8-c994-4704-821b-ee6df60d64f1-image-import-ca\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.928698 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3216b0f-6d90-469f-a674-5bfeb6bafb5c-config\") pod \"openshift-apiserver-operator-846cbfc458-w84j9\" (UID: \"a3216b0f-6d90-469f-a674-5bfeb6bafb5c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w84j9" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.930846 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c99795b-25a0-4c75-87ba-3c72c10f621d-console-config\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.930935 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-serving-cert\") pod \"controller-manager-65b6cccf98-qq9vl\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.931461 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0202df03-1e3b-4cb3-a279-a6376a61ac6a-auth-proxy-config\") pod \"machine-approver-54c688565-rfm46\" (UID: \"0202df03-1e3b-4cb3-a279-a6376a61ac6a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rfm46" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 
14:17:03.931645 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bdf8fc3-f5ff-4721-b79f-539fec1dabd5-serving-cert\") pod \"console-operator-67c89758df-s875b\" (UID: \"1bdf8fc3-f5ff-4721-b79f-539fec1dabd5\") " pod="openshift-console-operator/console-operator-67c89758df-s875b" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.931649 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-client-ca\") pod \"route-controller-manager-776cdc94d6-w9fgl\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.931948 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3170f172-b46b-4670-94e6-a340749c97e4-serving-cert\") pod \"authentication-operator-7f5c659b84-j9rvg\" (UID: \"3170f172-b46b-4670-94e6-a340749c97e4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.932248 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7af25bf8-c994-4704-821b-ee6df60d64f1-etcd-client\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.932271 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.932542 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-qq9vl\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.932662 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7af25bf8-c994-4704-821b-ee6df60d64f1-encryption-config\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.932923 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.933437 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-serving-cert\") pod \"route-controller-manager-776cdc94d6-w9fgl\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.933446 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.933496 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/abca4105-161b-4d77-9d70-35b13bbcabfd-metrics-tls\") pod \"dns-default-flfnb\" (UID: \"abca4105-161b-4d77-9d70-35b13bbcabfd\") " pod="openshift-dns/dns-default-flfnb" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.933734 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqqqd\" (UniqueName: \"kubernetes.io/projected/ff17a5cc-b922-4d50-8639-eb19d9c97069-kube-api-access-qqqqd\") pod \"machine-config-controller-f9cdd68f7-zbf7n\" (UID: \"ff17a5cc-b922-4d50-8639-eb19d9c97069\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.933776 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3216b0f-6d90-469f-a674-5bfeb6bafb5c-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-w84j9\" (UID: \"a3216b0f-6d90-469f-a674-5bfeb6bafb5c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w84j9" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.933831 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.933848 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c99795b-25a0-4c75-87ba-3c72c10f621d-trusted-ca-bundle\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.933864 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/595a4ab3-66a1-41ae-93e2-9476c1b14270-trusted-ca\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.934394 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.934470 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/595a4ab3-66a1-41ae-93e2-9476c1b14270-bound-sa-token\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.934514 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0202df03-1e3b-4cb3-a279-a6376a61ac6a-machine-approver-tls\") pod 
\"machine-approver-54c688565-rfm46\" (UID: \"0202df03-1e3b-4cb3-a279-a6376a61ac6a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rfm46" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.934543 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-s46r2\" (UID: \"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.934645 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-config\") pod \"controller-manager-65b6cccf98-qq9vl\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.934679 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7af25bf8-c994-4704-821b-ee6df60d64f1-audit\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.934700 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-tmp\") pod \"route-controller-manager-776cdc94d6-w9fgl\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.934770 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/595a4ab3-66a1-41ae-93e2-9476c1b14270-trusted-ca\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.934774 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/072ff625-701b-439b-841f-07ca74f91eee-available-featuregates\") pod \"openshift-config-operator-5777786469-fc65d\" (UID: \"072ff625-701b-439b-841f-07ca74f91eee\") " pod="openshift-config-operator/openshift-config-operator-5777786469-fc65d" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.934837 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7af25bf8-c994-4704-821b-ee6df60d64f1-serving-cert\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.934863 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-audit-dir\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.934892 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jvct\" (UniqueName: \"kubernetes.io/projected/abca4105-161b-4d77-9d70-35b13bbcabfd-kube-api-access-7jvct\") pod \"dns-default-flfnb\" (UID: \"abca4105-161b-4d77-9d70-35b13bbcabfd\") " pod="openshift-dns/dns-default-flfnb" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.934912 5110 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-s46r2\" (UID: \"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.935029 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-tmp\") pod \"route-controller-manager-776cdc94d6-w9fgl\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.935804 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7af25bf8-c994-4704-821b-ee6df60d64f1-node-pullsecrets\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.935870 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ad86e7-ae45-45ea-b4f5-ea725569075a-config\") pod \"machine-api-operator-755bb95488-sxktl\" (UID: \"e7ad86e7-ae45-45ea-b4f5-ea725569075a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-sxktl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.935878 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-audit-dir\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 
22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.935883 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7af25bf8-c994-4704-821b-ee6df60d64f1-node-pullsecrets\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.935935 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7ad86e7-ae45-45ea-b4f5-ea725569075a-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-sxktl\" (UID: \"e7ad86e7-ae45-45ea-b4f5-ea725569075a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-sxktl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.935968 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7af25bf8-c994-4704-821b-ee6df60d64f1-audit\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.936038 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3170f172-b46b-4670-94e6-a340749c97e4-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-j9rvg\" (UID: \"3170f172-b46b-4670-94e6-a340749c97e4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.936118 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jn7nn\" (UniqueName: \"kubernetes.io/projected/3170f172-b46b-4670-94e6-a340749c97e4-kube-api-access-jn7nn\") pod \"authentication-operator-7f5c659b84-j9rvg\" (UID: \"3170f172-b46b-4670-94e6-a340749c97e4\") 
" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.936774 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ad86e7-ae45-45ea-b4f5-ea725569075a-config\") pod \"machine-api-operator-755bb95488-sxktl\" (UID: \"e7ad86e7-ae45-45ea-b4f5-ea725569075a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-sxktl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.937210 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3170f172-b46b-4670-94e6-a340749c97e4-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-j9rvg\" (UID: \"3170f172-b46b-4670-94e6-a340749c97e4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.939023 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.939742 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0202df03-1e3b-4cb3-a279-a6376a61ac6a-machine-approver-tls\") pod \"machine-approver-54c688565-rfm46\" (UID: \"0202df03-1e3b-4cb3-a279-a6376a61ac6a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rfm46" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.939778 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.940244 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7ad86e7-ae45-45ea-b4f5-ea725569075a-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-sxktl\" (UID: \"e7ad86e7-ae45-45ea-b4f5-ea725569075a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-sxktl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.940304 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.940414 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/595a4ab3-66a1-41ae-93e2-9476c1b14270-registry-tls\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.940789 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.940329 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.940979 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-config\") pod \"controller-manager-65b6cccf98-qq9vl\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.942668 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.943043 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7af25bf8-c994-4704-821b-ee6df60d64f1-serving-cert\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.961245 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 22 14:17:03 crc kubenswrapper[5110]: I0122 14:17:03.981078 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.000378 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 
14:17:04.020610 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037102 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037216 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/abca4105-161b-4d77-9d70-35b13bbcabfd-tmp-dir\") pod \"dns-default-flfnb\" (UID: \"abca4105-161b-4d77-9d70-35b13bbcabfd\") " pod="openshift-dns/dns-default-flfnb" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037242 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mlprj\" (UniqueName: \"kubernetes.io/projected/e867811f-a825-4545-9591-a00087eb4e33-kube-api-access-mlprj\") pod \"dns-operator-799b87ffcd-f899n\" (UID: \"e867811f-a825-4545-9591-a00087eb4e33\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-f899n" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037263 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/abc0de56-1146-46a6-8b5b-68373a09ba37-default-certificate\") pod \"router-default-68cf44c8b8-qkzl4\" (UID: \"abc0de56-1146-46a6-8b5b-68373a09ba37\") " pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037384 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc0de56-1146-46a6-8b5b-68373a09ba37-service-ca-bundle\") pod 
\"router-default-68cf44c8b8-qkzl4\" (UID: \"abc0de56-1146-46a6-8b5b-68373a09ba37\") " pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037420 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff17a5cc-b922-4d50-8639-eb19d9c97069-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-zbf7n\" (UID: \"ff17a5cc-b922-4d50-8639-eb19d9c97069\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037446 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2-tmp\") pod \"cluster-image-registry-operator-86c45576b9-s46r2\" (UID: \"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037462 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/abc0de56-1146-46a6-8b5b-68373a09ba37-metrics-certs\") pod \"router-default-68cf44c8b8-qkzl4\" (UID: \"abc0de56-1146-46a6-8b5b-68373a09ba37\") " pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037488 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/863750d1-e921-4e7e-b99b-365391af8edf-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-pfd2d\" (UID: \"863750d1-e921-4e7e-b99b-365391af8edf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037517 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d1f0610-507b-49ed-9396-89d0fd379fb4-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-w87x4\" (UID: \"9d1f0610-507b-49ed-9396-89d0fd379fb4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w87x4" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037544 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-s46r2\" (UID: \"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037559 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8280d23b-d6cf-40ad-996e-d148f43bd0dd-tmpfs\") pod \"catalog-operator-75ff9f647d-c7wsk\" (UID: \"8280d23b-d6cf-40ad-996e-d148f43bd0dd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037581 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf-config\") pod \"service-ca-operator-5b9c976747-2vcrx\" (UID: \"526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2vcrx" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037629 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/072ff625-701b-439b-841f-07ca74f91eee-serving-cert\") pod \"openshift-config-operator-5777786469-fc65d\" (UID: \"072ff625-701b-439b-841f-07ca74f91eee\") " 
pod="openshift-config-operator/openshift-config-operator-5777786469-fc65d" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037603 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/abca4105-161b-4d77-9d70-35b13bbcabfd-tmp-dir\") pod \"dns-default-flfnb\" (UID: \"abca4105-161b-4d77-9d70-35b13bbcabfd\") " pod="openshift-dns/dns-default-flfnb" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037649 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863750d1-e921-4e7e-b99b-365391af8edf-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-pfd2d\" (UID: \"863750d1-e921-4e7e-b99b-365391af8edf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037665 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/863750d1-e921-4e7e-b99b-365391af8edf-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-pfd2d\" (UID: \"863750d1-e921-4e7e-b99b-365391af8edf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037692 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863750d1-e921-4e7e-b99b-365391af8edf-config\") pod \"openshift-kube-scheduler-operator-54f497555d-pfd2d\" (UID: \"863750d1-e921-4e7e-b99b-365391af8edf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037710 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/e867811f-a825-4545-9591-a00087eb4e33-metrics-tls\") pod \"dns-operator-799b87ffcd-f899n\" (UID: \"e867811f-a825-4545-9591-a00087eb4e33\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-f899n" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037725 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff17a5cc-b922-4d50-8639-eb19d9c97069-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-zbf7n\" (UID: \"ff17a5cc-b922-4d50-8639-eb19d9c97069\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.037740 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/abc0de56-1146-46a6-8b5b-68373a09ba37-stats-auth\") pod \"router-default-68cf44c8b8-qkzl4\" (UID: \"abc0de56-1146-46a6-8b5b-68373a09ba37\") " pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.038033 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:04.53800822 +0000 UTC m=+104.760092579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038228 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc0de56-1146-46a6-8b5b-68373a09ba37-service-ca-bundle\") pod \"router-default-68cf44c8b8-qkzl4\" (UID: \"abc0de56-1146-46a6-8b5b-68373a09ba37\") " pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038456 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac1adcf2-2577-4d5c-86d8-7c21ccc049fb-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-72qpw\" (UID: \"ac1adcf2-2577-4d5c-86d8-7c21ccc049fb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038502 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac1adcf2-2577-4d5c-86d8-7c21ccc049fb-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-72qpw\" (UID: \"ac1adcf2-2577-4d5c-86d8-7c21ccc049fb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038533 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6dkf9\" (UniqueName: \"kubernetes.io/projected/ac1adcf2-2577-4d5c-86d8-7c21ccc049fb-kube-api-access-6dkf9\") pod \"ingress-operator-6b9cb4dbcf-72qpw\" (UID: 
\"ac1adcf2-2577-4d5c-86d8-7c21ccc049fb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038570 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5rbnc\" (UniqueName: \"kubernetes.io/projected/abc0de56-1146-46a6-8b5b-68373a09ba37-kube-api-access-5rbnc\") pod \"router-default-68cf44c8b8-qkzl4\" (UID: \"abc0de56-1146-46a6-8b5b-68373a09ba37\") " pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038593 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h4xlv\" (UniqueName: \"kubernetes.io/projected/8280d23b-d6cf-40ad-996e-d148f43bd0dd-kube-api-access-h4xlv\") pod \"catalog-operator-75ff9f647d-c7wsk\" (UID: \"8280d23b-d6cf-40ad-996e-d148f43bd0dd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038645 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d1f0610-507b-49ed-9396-89d0fd379fb4-config\") pod \"kube-storage-version-migrator-operator-565b79b866-w87x4\" (UID: \"9d1f0610-507b-49ed-9396-89d0fd379fb4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w87x4" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038672 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ztvnt\" (UniqueName: \"kubernetes.io/projected/b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2-kube-api-access-ztvnt\") pod \"cluster-image-registry-operator-86c45576b9-s46r2\" (UID: \"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038704 5110 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abca4105-161b-4d77-9d70-35b13bbcabfd-config-volume\") pod \"dns-default-flfnb\" (UID: \"abca4105-161b-4d77-9d70-35b13bbcabfd\") " pod="openshift-dns/dns-default-flfnb" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038736 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-s46r2\" (UID: \"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038759 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8280d23b-d6cf-40ad-996e-d148f43bd0dd-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-c7wsk\" (UID: \"8280d23b-d6cf-40ad-996e-d148f43bd0dd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038784 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hrktg\" (UniqueName: \"kubernetes.io/projected/9d1f0610-507b-49ed-9396-89d0fd379fb4-kube-api-access-hrktg\") pod \"kube-storage-version-migrator-operator-565b79b866-w87x4\" (UID: \"9d1f0610-507b-49ed-9396-89d0fd379fb4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w87x4" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038807 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf-serving-cert\") pod \"service-ca-operator-5b9c976747-2vcrx\" (UID: \"526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf\") " 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2vcrx" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038828 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mbsvz\" (UniqueName: \"kubernetes.io/projected/526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf-kube-api-access-mbsvz\") pod \"service-ca-operator-5b9c976747-2vcrx\" (UID: \"526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2vcrx" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038876 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9qg5g\" (UniqueName: \"kubernetes.io/projected/072ff625-701b-439b-841f-07ca74f91eee-kube-api-access-9qg5g\") pod \"openshift-config-operator-5777786469-fc65d\" (UID: \"072ff625-701b-439b-841f-07ca74f91eee\") " pod="openshift-config-operator/openshift-config-operator-5777786469-fc65d" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038899 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e867811f-a825-4545-9591-a00087eb4e33-tmp-dir\") pod \"dns-operator-799b87ffcd-f899n\" (UID: \"e867811f-a825-4545-9591-a00087eb4e33\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-f899n" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038925 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wcgx2\" (UniqueName: \"kubernetes.io/projected/41b28f5f-47e9-49c2-a0f3-efb26640b87f-kube-api-access-wcgx2\") pod \"migrator-866fcbc849-ntc52\" (UID: \"41b28f5f-47e9-49c2-a0f3-efb26640b87f\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-ntc52" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038947 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/8280d23b-d6cf-40ad-996e-d148f43bd0dd-srv-cert\") pod \"catalog-operator-75ff9f647d-c7wsk\" (UID: \"8280d23b-d6cf-40ad-996e-d148f43bd0dd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.038968 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac1adcf2-2577-4d5c-86d8-7c21ccc049fb-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-72qpw\" (UID: \"ac1adcf2-2577-4d5c-86d8-7c21ccc049fb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.039018 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/abca4105-161b-4d77-9d70-35b13bbcabfd-metrics-tls\") pod \"dns-default-flfnb\" (UID: \"abca4105-161b-4d77-9d70-35b13bbcabfd\") " pod="openshift-dns/dns-default-flfnb" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.039045 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qqqqd\" (UniqueName: \"kubernetes.io/projected/ff17a5cc-b922-4d50-8639-eb19d9c97069-kube-api-access-qqqqd\") pod \"machine-config-controller-f9cdd68f7-zbf7n\" (UID: \"ff17a5cc-b922-4d50-8639-eb19d9c97069\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.039081 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-s46r2\" (UID: \"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.039118 5110 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/072ff625-701b-439b-841f-07ca74f91eee-available-featuregates\") pod \"openshift-config-operator-5777786469-fc65d\" (UID: \"072ff625-701b-439b-841f-07ca74f91eee\") " pod="openshift-config-operator/openshift-config-operator-5777786469-fc65d" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.039148 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7jvct\" (UniqueName: \"kubernetes.io/projected/abca4105-161b-4d77-9d70-35b13bbcabfd-kube-api-access-7jvct\") pod \"dns-default-flfnb\" (UID: \"abca4105-161b-4d77-9d70-35b13bbcabfd\") " pod="openshift-dns/dns-default-flfnb" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.039169 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-s46r2\" (UID: \"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.039842 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8280d23b-d6cf-40ad-996e-d148f43bd0dd-tmpfs\") pod \"catalog-operator-75ff9f647d-c7wsk\" (UID: \"8280d23b-d6cf-40ad-996e-d148f43bd0dd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.039936 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff17a5cc-b922-4d50-8639-eb19d9c97069-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-zbf7n\" (UID: \"ff17a5cc-b922-4d50-8639-eb19d9c97069\") " 
pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.040351 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/072ff625-701b-439b-841f-07ca74f91eee-available-featuregates\") pod \"openshift-config-operator-5777786469-fc65d\" (UID: \"072ff625-701b-439b-841f-07ca74f91eee\") " pod="openshift-config-operator/openshift-config-operator-5777786469-fc65d" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.040913 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-s46r2\" (UID: \"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.041334 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-s46r2\" (UID: \"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.041518 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2-tmp\") pod \"cluster-image-registry-operator-86c45576b9-s46r2\" (UID: \"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.041953 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/abc0de56-1146-46a6-8b5b-68373a09ba37-stats-auth\") pod \"router-default-68cf44c8b8-qkzl4\" (UID: \"abc0de56-1146-46a6-8b5b-68373a09ba37\") " pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.042115 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/863750d1-e921-4e7e-b99b-365391af8edf-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-pfd2d\" (UID: \"863750d1-e921-4e7e-b99b-365391af8edf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.042166 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e867811f-a825-4545-9591-a00087eb4e33-tmp-dir\") pod \"dns-operator-799b87ffcd-f899n\" (UID: \"e867811f-a825-4545-9591-a00087eb4e33\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-f899n" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.044452 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/abc0de56-1146-46a6-8b5b-68373a09ba37-metrics-certs\") pod \"router-default-68cf44c8b8-qkzl4\" (UID: \"abc0de56-1146-46a6-8b5b-68373a09ba37\") " pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.044772 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/abc0de56-1146-46a6-8b5b-68373a09ba37-default-certificate\") pod \"router-default-68cf44c8b8-qkzl4\" (UID: \"abc0de56-1146-46a6-8b5b-68373a09ba37\") " pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.060198 5110 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.080664 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.100818 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.120651 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.142253 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.143371 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:04.643353282 +0000 UTC m=+104.865437631 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.174438 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.174540 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.182352 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.201023 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.221047 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.240870 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.243694 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.244114 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:04.744067621 +0000 UTC m=+104.966151980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.260137 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.280906 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.300325 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.321549 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.339956 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.344614 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.345030 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:04.845009847 +0000 UTC m=+105.067094216 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.359613 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.380694 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.399908 5110 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.420494 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.441179 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.446054 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.446270 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:04.94624308 +0000 UTC m=+105.168327449 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.446414 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.446829 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:04.946812205 +0000 UTC m=+105.168896584 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.462185 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.470315 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abca4105-161b-4d77-9d70-35b13bbcabfd-config-volume\") pod \"dns-default-flfnb\" (UID: \"abca4105-161b-4d77-9d70-35b13bbcabfd\") " pod="openshift-dns/dns-default-flfnb" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.481659 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.500655 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.513248 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/abca4105-161b-4d77-9d70-35b13bbcabfd-metrics-tls\") pod \"dns-default-flfnb\" (UID: \"abca4105-161b-4d77-9d70-35b13bbcabfd\") " pod="openshift-dns/dns-default-flfnb" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.521656 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.531691 5110 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/072ff625-701b-439b-841f-07ca74f91eee-serving-cert\") pod \"openshift-config-operator-5777786469-fc65d\" (UID: \"072ff625-701b-439b-841f-07ca74f91eee\") " pod="openshift-config-operator/openshift-config-operator-5777786469-fc65d" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.540289 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.548054 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.548213 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:05.048194241 +0000 UTC m=+105.270278600 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.548502 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.548892 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:05.04888174 +0000 UTC m=+105.270966099 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.561002 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.581666 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.600189 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.610847 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d1f0610-507b-49ed-9396-89d0fd379fb4-config\") pod \"kube-storage-version-migrator-operator-565b79b866-w87x4\" (UID: \"9d1f0610-507b-49ed-9396-89d0fd379fb4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w87x4" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.620550 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.640702 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 
22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.649555 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.650216 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:05.150193915 +0000 UTC m=+105.372278274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.660349 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.679648 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.692468 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d1f0610-507b-49ed-9396-89d0fd379fb4-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-w87x4\" (UID: 
\"9d1f0610-507b-49ed-9396-89d0fd379fb4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w87x4" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.700467 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.713087 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8280d23b-d6cf-40ad-996e-d148f43bd0dd-srv-cert\") pod \"catalog-operator-75ff9f647d-c7wsk\" (UID: \"8280d23b-d6cf-40ad-996e-d148f43bd0dd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.721336 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.733165 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8280d23b-d6cf-40ad-996e-d148f43bd0dd-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-c7wsk\" (UID: \"8280d23b-d6cf-40ad-996e-d148f43bd0dd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.740502 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.751556 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") 
" pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.751963 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:05.251948212 +0000 UTC m=+105.474032571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.760569 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.775863 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf-serving-cert\") pod \"service-ca-operator-5b9c976747-2vcrx\" (UID: \"526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2vcrx" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.780304 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.800745 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.821246 5110 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.829609 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf-config\") pod \"service-ca-operator-5b9c976747-2vcrx\" (UID: \"526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2vcrx" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.839485 5110 request.go:752] "Waited before sending request" delay="1.016107072s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dcluster-image-registry-operator-dockercfg-ntnd7&limit=500&resourceVersion=0" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.842791 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.852168 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.852464 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:05.352442806 +0000 UTC m=+105.574527175 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.852719 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.853126 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:05.353102994 +0000 UTC m=+105.575187393 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.861116 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.875353 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-s46r2\" (UID: \"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.880402 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.901766 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.921310 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.948645 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.951000 5110 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac1adcf2-2577-4d5c-86d8-7c21ccc049fb-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-72qpw\" (UID: \"ac1adcf2-2577-4d5c-86d8-7c21ccc049fb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.954314 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.954740 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:05.454701477 +0000 UTC m=+105.676785856 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.954936 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.955063 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.955171 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.955244 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: 
\"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.955294 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.955473 5110 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.955512 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.955541 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.955554 5110 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.955610 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert 
podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:17:20.95557873 +0000 UTC m=+121.177663129 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.955712 5110 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.955723 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 14:17:20.955666052 +0000 UTC m=+121.177750451 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.955777 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.955833 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 14:17:20.955805176 +0000 UTC m=+121.177889585 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.955836 5110 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.955876 5110 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.955950 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 14:17:20.955929749 +0000 UTC m=+121.178014148 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 14:17:04 crc kubenswrapper[5110]: E0122 14:17:04.956166 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 14:17:05.456150085 +0000 UTC m=+105.678234464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.961144 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.973535 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac1adcf2-2577-4d5c-86d8-7c21ccc049fb-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-72qpw\" (UID: \"ac1adcf2-2577-4d5c-86d8-7c21ccc049fb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw"
Jan 22 14:17:04 crc kubenswrapper[5110]: I0122 14:17:04.980640 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.000764 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.021480 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.032075 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863750d1-e921-4e7e-b99b-365391af8edf-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-pfd2d\" (UID: \"863750d1-e921-4e7e-b99b-365391af8edf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d"
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.038857 5110 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.038986 5110 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.039199 5110 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.039240 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff17a5cc-b922-4d50-8639-eb19d9c97069-proxy-tls podName:ff17a5cc-b922-4d50-8639-eb19d9c97069 nodeName:}" failed. No retries permitted until 2026-01-22 14:17:05.539162547 +0000 UTC m=+105.761246906 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/ff17a5cc-b922-4d50-8639-eb19d9c97069-proxy-tls") pod "machine-config-controller-f9cdd68f7-zbf7n" (UID: "ff17a5cc-b922-4d50-8639-eb19d9c97069") : failed to sync secret cache: timed out waiting for the condition
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.039479 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e867811f-a825-4545-9591-a00087eb4e33-metrics-tls podName:e867811f-a825-4545-9591-a00087eb4e33 nodeName:}" failed. No retries permitted until 2026-01-22 14:17:05.539454775 +0000 UTC m=+105.761539134 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/e867811f-a825-4545-9591-a00087eb4e33-metrics-tls") pod "dns-operator-799b87ffcd-f899n" (UID: "e867811f-a825-4545-9591-a00087eb4e33") : failed to sync secret cache: timed out waiting for the condition
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.039611 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/863750d1-e921-4e7e-b99b-365391af8edf-config podName:863750d1-e921-4e7e-b99b-365391af8edf nodeName:}" failed. No retries permitted until 2026-01-22 14:17:05.539596979 +0000 UTC m=+105.761681338 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/863750d1-e921-4e7e-b99b-365391af8edf-config") pod "openshift-kube-scheduler-operator-54f497555d-pfd2d" (UID: "863750d1-e921-4e7e-b99b-365391af8edf") : failed to sync configmap cache: timed out waiting for the condition
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.041360 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.057072 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.057322 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs\") pod \"network-metrics-daemon-js5pl\" (UID: \"455fa20f-c1d4-4086-8874-9526d4c4d24d\") " pod="openshift-multus/network-metrics-daemon-js5pl"
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.057544 5110 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.057599 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs podName:455fa20f-c1d4-4086-8874-9526d4c4d24d nodeName:}" failed. No retries permitted until 2026-01-22 14:17:21.057584354 +0000 UTC m=+121.279668713 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs") pod "network-metrics-daemon-js5pl" (UID: "455fa20f-c1d4-4086-8874-9526d4c4d24d") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.057856 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:05.55783361 +0000 UTC m=+105.779917999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.061677 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.080015 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.101804 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.121518 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.140775 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.159028 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.159488 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:05.659466114 +0000 UTC m=+105.881550493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.161421 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.182044 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.201513 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.222184 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.241070 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.259787 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.259982 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:05.759954828 +0000 UTC m=+105.982039177 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.260296 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.260654 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:05.760608785 +0000 UTC m=+105.982693144 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.260702 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.272973 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.273516 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-js5pl"
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.273691 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.273933 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.280474 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.300370 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.340813 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.360969 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.361140 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.361273 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:05.861256053 +0000 UTC m=+106.083340412 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.380482 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.402079 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.420093 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.441518 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.460722 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.463107 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.463852 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:05.963827972 +0000 UTC m=+106.185912371 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.481372 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.500976 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.520720 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.540788 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.560278 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.564008 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.564141 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:06.06411649 +0000 UTC m=+106.286200859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.564355 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863750d1-e921-4e7e-b99b-365391af8edf-config\") pod \"openshift-kube-scheduler-operator-54f497555d-pfd2d\" (UID: \"863750d1-e921-4e7e-b99b-365391af8edf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d"
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.564413 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e867811f-a825-4545-9591-a00087eb4e33-metrics-tls\") pod \"dns-operator-799b87ffcd-f899n\" (UID: \"e867811f-a825-4545-9591-a00087eb4e33\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-f899n"
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.564450 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff17a5cc-b922-4d50-8639-eb19d9c97069-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-zbf7n\" (UID: \"ff17a5cc-b922-4d50-8639-eb19d9c97069\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n"
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.564716 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.565002 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863750d1-e921-4e7e-b99b-365391af8edf-config\") pod \"openshift-kube-scheduler-operator-54f497555d-pfd2d\" (UID: \"863750d1-e921-4e7e-b99b-365391af8edf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d"
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.565048 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:06.065037504 +0000 UTC m=+106.287121873 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.569540 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff17a5cc-b922-4d50-8639-eb19d9c97069-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-zbf7n\" (UID: \"ff17a5cc-b922-4d50-8639-eb19d9c97069\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n"
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.570308 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e867811f-a825-4545-9591-a00087eb4e33-metrics-tls\") pod \"dns-operator-799b87ffcd-f899n\" (UID: \"e867811f-a825-4545-9591-a00087eb4e33\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-f899n"
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.580561 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.600479 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.620817 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.640444 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.660657 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.666709 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.666939 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:06.166902084 +0000 UTC m=+106.388986483 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.667715 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.668140 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:06.168128147 +0000 UTC m=+106.390212516 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.680773 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.701001 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.720658 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.741003 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.760854 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.768918 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.769144 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:06.269127654 +0000 UTC m=+106.491212003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.780645 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.800385 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.821286 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.841573 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.858587 5110 request.go:752] "Waited before sending request" delay="1.960976834s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0"
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.861609 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.870530 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.870943 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:06.370925512 +0000 UTC m=+106.593009951 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.881481 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.901228 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.939251 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lmz8\" (UniqueName: \"kubernetes.io/projected/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-kube-api-access-9lmz8\") pod \"route-controller-manager-776cdc94d6-w9fgl\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl"
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.959988 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xmv6\" (UniqueName: \"kubernetes.io/projected/7af25bf8-c994-4704-821b-ee6df60d64f1-kube-api-access-9xmv6\") pod \"apiserver-9ddfb9f55-knd9m\" (UID: \"7af25bf8-c994-4704-821b-ee6df60d64f1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m"
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.971928 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.972064 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:06.472042792 +0000 UTC m=+106.694127151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.972704 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:05 crc kubenswrapper[5110]: E0122 14:17:05.973051 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:06.473040679 +0000 UTC m=+106.695125038 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.980264 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-28t4b\" (UniqueName: \"kubernetes.io/projected/595a4ab3-66a1-41ae-93e2-9476c1b14270-kube-api-access-28t4b\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:05 crc kubenswrapper[5110]: I0122 14:17:05.997390 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s8sd\" (UniqueName: \"kubernetes.io/projected/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-kube-api-access-7s8sd\") pod \"controller-manager-65b6cccf98-qq9vl\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.016122 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2szb\" (UniqueName: \"kubernetes.io/projected/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-kube-api-access-t2szb\") pod \"oauth-openshift-66458b6674-pgzmf\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.034897 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mfcr\" (UniqueName: \"kubernetes.io/projected/a3216b0f-6d90-469f-a674-5bfeb6bafb5c-kube-api-access-4mfcr\") pod \"openshift-apiserver-operator-846cbfc458-w84j9\" (UID: \"a3216b0f-6d90-469f-a674-5bfeb6bafb5c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w84j9"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.054047 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw2j7\" (UniqueName: \"kubernetes.io/projected/0202df03-1e3b-4cb3-a279-a6376a61ac6a-kube-api-access-vw2j7\") pod \"machine-approver-54c688565-rfm46\" (UID: \"0202df03-1e3b-4cb3-a279-a6376a61ac6a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-rfm46"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.073355 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:06 crc kubenswrapper[5110]: E0122 14:17:06.073830 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:06.57381172 +0000 UTC m=+106.795896079 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.075060 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fptcm\" (UniqueName: \"kubernetes.io/projected/5c99795b-25a0-4c75-87ba-3c72c10f621d-kube-api-access-fptcm\") pod \"console-64d44f6ddf-pk2pv\" (UID: \"5c99795b-25a0-4c75-87ba-3c72c10f621d\") " pod="openshift-console/console-64d44f6ddf-pk2pv" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.094567 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckv46\" (UniqueName: \"kubernetes.io/projected/e7ad86e7-ae45-45ea-b4f5-ea725569075a-kube-api-access-ckv46\") pod \"machine-api-operator-755bb95488-sxktl\" (UID: \"e7ad86e7-ae45-45ea-b4f5-ea725569075a\") " pod="openshift-machine-api/machine-api-operator-755bb95488-sxktl" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.109030 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.115027 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqvdz\" (UniqueName: \"kubernetes.io/projected/1bdf8fc3-f5ff-4721-b79f-539fec1dabd5-kube-api-access-sqvdz\") pod \"console-operator-67c89758df-s875b\" (UID: \"1bdf8fc3-f5ff-4721-b79f-539fec1dabd5\") " pod="openshift-console-operator/console-operator-67c89758df-s875b" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.119551 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.135921 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/595a4ab3-66a1-41ae-93e2-9476c1b14270-bound-sa-token\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.136221 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-sxktl" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.141511 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.152077 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-rfm46" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.154964 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn7nn\" (UniqueName: \"kubernetes.io/projected/3170f172-b46b-4670-94e6-a340749c97e4-kube-api-access-jn7nn\") pod \"authentication-operator-7f5c659b84-j9rvg\" (UID: \"3170f172-b46b-4670-94e6-a340749c97e4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.173293 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.174440 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:06 crc kubenswrapper[5110]: E0122 14:17:06.175074 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:06.675059254 +0000 UTC m=+106.897143603 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.178812 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w84j9" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.180704 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlprj\" (UniqueName: \"kubernetes.io/projected/e867811f-a825-4545-9591-a00087eb4e33-kube-api-access-mlprj\") pod \"dns-operator-799b87ffcd-f899n\" (UID: \"e867811f-a825-4545-9591-a00087eb4e33\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-f899n" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.198653 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.198971 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/863750d1-e921-4e7e-b99b-365391af8edf-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-pfd2d\" (UID: \"863750d1-e921-4e7e-b99b-365391af8edf\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.204350 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-pk2pv" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.211845 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-s875b" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.222404 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rbnc\" (UniqueName: \"kubernetes.io/projected/abc0de56-1146-46a6-8b5b-68373a09ba37-kube-api-access-5rbnc\") pod \"router-default-68cf44c8b8-qkzl4\" (UID: \"abc0de56-1146-46a6-8b5b-68373a09ba37\") " pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.231333 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.238351 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dkf9\" (UniqueName: \"kubernetes.io/projected/ac1adcf2-2577-4d5c-86d8-7c21ccc049fb-kube-api-access-6dkf9\") pod \"ingress-operator-6b9cb4dbcf-72qpw\" (UID: \"ac1adcf2-2577-4d5c-86d8-7c21ccc049fb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw" Jan 22 14:17:06 crc kubenswrapper[5110]: W0122 14:17:06.248006 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc0de56_1146_46a6_8b5b_68373a09ba37.slice/crio-7cda1a2b87fa1000b5483717dae870d9a4abfe3b7de7949b076b94d65a0193ac WatchSource:0}: Error finding container 7cda1a2b87fa1000b5483717dae870d9a4abfe3b7de7949b076b94d65a0193ac: Status 404 returned error can't find the container with id 7cda1a2b87fa1000b5483717dae870d9a4abfe3b7de7949b076b94d65a0193ac Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.261383 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztvnt\" (UniqueName: \"kubernetes.io/projected/b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2-kube-api-access-ztvnt\") pod \"cluster-image-registry-operator-86c45576b9-s46r2\" (UID: 
\"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.277611 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:06 crc kubenswrapper[5110]: E0122 14:17:06.277999 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:06.777977792 +0000 UTC m=+107.000062151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.279151 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-s46r2\" (UID: \"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.297170 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrktg\" (UniqueName: 
\"kubernetes.io/projected/9d1f0610-507b-49ed-9396-89d0fd379fb4-kube-api-access-hrktg\") pod \"kube-storage-version-migrator-operator-565b79b866-w87x4\" (UID: \"9d1f0610-507b-49ed-9396-89d0fd379fb4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w87x4" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.313155 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w87x4" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.355146 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqqqd\" (UniqueName: \"kubernetes.io/projected/ff17a5cc-b922-4d50-8639-eb19d9c97069-kube-api-access-qqqqd\") pod \"machine-config-controller-f9cdd68f7-zbf7n\" (UID: \"ff17a5cc-b922-4d50-8639-eb19d9c97069\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.371138 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.375285 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qg5g\" (UniqueName: \"kubernetes.io/projected/072ff625-701b-439b-841f-07ca74f91eee-kube-api-access-9qg5g\") pod \"openshift-config-operator-5777786469-fc65d\" (UID: \"072ff625-701b-439b-841f-07ca74f91eee\") " pod="openshift-config-operator/openshift-config-operator-5777786469-fc65d" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.379000 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:06 crc kubenswrapper[5110]: E0122 14:17:06.379362 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:06.879349589 +0000 UTC m=+107.101433948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.391030 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.394885 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbsvz\" (UniqueName: \"kubernetes.io/projected/526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf-kube-api-access-mbsvz\") pod \"service-ca-operator-5b9c976747-2vcrx\" (UID: \"526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2vcrx" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.407611 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.415931 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcgx2\" (UniqueName: \"kubernetes.io/projected/41b28f5f-47e9-49c2-a0f3-efb26640b87f-kube-api-access-wcgx2\") pod \"migrator-866fcbc849-ntc52\" (UID: \"41b28f5f-47e9-49c2-a0f3-efb26640b87f\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-ntc52" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.423483 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-f899n" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.436010 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jvct\" (UniqueName: \"kubernetes.io/projected/abca4105-161b-4d77-9d70-35b13bbcabfd-kube-api-access-7jvct\") pod \"dns-default-flfnb\" (UID: \"abca4105-161b-4d77-9d70-35b13bbcabfd\") " pod="openshift-dns/dns-default-flfnb" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.462700 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.480144 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:06 crc kubenswrapper[5110]: E0122 14:17:06.480458 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:06.980426798 +0000 UTC m=+107.202511157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.480533 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:06 crc kubenswrapper[5110]: E0122 14:17:06.481028 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:06.981011883 +0000 UTC m=+107.203096242 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.482232 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.500469 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.521658 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.541265 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.554792 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-ntc52" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.561257 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.582466 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:06 crc kubenswrapper[5110]: E0122 14:17:06.582609 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:07.082587526 +0000 UTC m=+107.304671885 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.582757 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:06 crc kubenswrapper[5110]: E0122 14:17:06.583145 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:07.08313699 +0000 UTC m=+107.305221349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.598018 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-flfnb" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.604325 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-fc65d" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.664201 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2vcrx" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.683475 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:06 crc kubenswrapper[5110]: E0122 14:17:06.683563 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:07.183541842 +0000 UTC m=+107.405626201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.683755 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/117fa2cb-296d-4b02-bd35-be72c9070148-serving-cert\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.683798 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a39eb6e9-c23a-4196-89b4-edc51424175a-config\") pod \"kube-controller-manager-operator-69d5f845f8-fpbgg\" (UID: \"a39eb6e9-c23a-4196-89b4-edc51424175a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.683826 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cftsn\" (UniqueName: \"kubernetes.io/projected/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-kube-api-access-cftsn\") pod \"marketplace-operator-547dbd544d-74rfw\" (UID: \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.683848 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jkcj\" (UniqueName: 
\"kubernetes.io/projected/f89c668c-680e-4342-9003-c7140b9f5d51-kube-api-access-7jkcj\") pod \"downloads-747b44746d-swx4b\" (UID: \"f89c668c-680e-4342-9003-c7140b9f5d51\") " pod="openshift-console/downloads-747b44746d-swx4b"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.683864 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ccc3783f-31af-4d45-bf5f-1403105ce449-encryption-config\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.683885 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a39eb6e9-c23a-4196-89b4-edc51424175a-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-fpbgg\" (UID: \"a39eb6e9-c23a-4196-89b4-edc51424175a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.683902 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ec0db01-c734-4115-9b06-f28b8912aad3-webhook-cert\") pod \"packageserver-7d4fc7d867-9qczr\" (UID: \"1ec0db01-c734-4115-9b06-f28b8912aad3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.683916 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ccc3783f-31af-4d45-bf5f-1403105ce449-etcd-serving-ca\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.683932 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ccc3783f-31af-4d45-bf5f-1403105ce449-audit-policies\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.683958 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f810045f-32aa-488a-a2d7-0a20f8c88429-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-vwl9t\" (UID: \"f810045f-32aa-488a-a2d7-0a20f8c88429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.683971 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-tmp\") pod \"marketplace-operator-547dbd544d-74rfw\" (UID: \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.683996 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f810045f-32aa-488a-a2d7-0a20f8c88429-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-vwl9t\" (UID: \"f810045f-32aa-488a-a2d7-0a20f8c88429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684014 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/270f9528-4848-4264-8fdc-93e2ae195ec4-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-2m627\" (UID: \"270f9528-4848-4264-8fdc-93e2ae195ec4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2m627"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684030 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/117fa2cb-296d-4b02-bd35-be72c9070148-etcd-ca\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684052 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m2wv\" (UniqueName: \"kubernetes.io/projected/f810045f-32aa-488a-a2d7-0a20f8c88429-kube-api-access-5m2wv\") pod \"machine-config-operator-67c9d58cbb-vwl9t\" (UID: \"f810045f-32aa-488a-a2d7-0a20f8c88429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684065 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a39eb6e9-c23a-4196-89b4-edc51424175a-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-fpbgg\" (UID: \"a39eb6e9-c23a-4196-89b4-edc51424175a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684083 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a39eb6e9-c23a-4196-89b4-edc51424175a-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-fpbgg\" (UID: \"a39eb6e9-c23a-4196-89b4-edc51424175a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684098 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/117fa2cb-296d-4b02-bd35-be72c9070148-etcd-client\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684126 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm22q\" (UniqueName: \"kubernetes.io/projected/1c6d886d-04ca-4651-b1ca-0bda7bee5c7d-kube-api-access-lm22q\") pod \"control-plane-machine-set-operator-75ffdb6fcd-z8ls4\" (UID: \"1c6d886d-04ca-4651-b1ca-0bda7bee5c7d\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-z8ls4"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684141 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc3783f-31af-4d45-bf5f-1403105ce449-serving-cert\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684180 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ccc3783f-31af-4d45-bf5f-1403105ce449-audit-dir\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684196 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1ec0db01-c734-4115-9b06-f28b8912aad3-tmpfs\") pod \"packageserver-7d4fc7d867-9qczr\" (UID: \"1ec0db01-c734-4115-9b06-f28b8912aad3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684226 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684242 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dd4t\" (UniqueName: \"kubernetes.io/projected/ccc3783f-31af-4d45-bf5f-1403105ce449-kube-api-access-7dd4t\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684264 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4zmg\" (UniqueName: \"kubernetes.io/projected/eaede8c0-4327-4e2c-a850-8101195db984-kube-api-access-z4zmg\") pod \"openshift-controller-manager-operator-686468bdd5-7zqvd\" (UID: \"eaede8c0-4327-4e2c-a850-8101195db984\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684290 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c6d886d-04ca-4651-b1ca-0bda7bee5c7d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-z8ls4\" (UID: \"1c6d886d-04ca-4651-b1ca-0bda7bee5c7d\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-z8ls4"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684313 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrfp5\" (UniqueName: \"kubernetes.io/projected/1ec0db01-c734-4115-9b06-f28b8912aad3-kube-api-access-lrfp5\") pod \"packageserver-7d4fc7d867-9qczr\" (UID: \"1ec0db01-c734-4115-9b06-f28b8912aad3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684339 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eaede8c0-4327-4e2c-a850-8101195db984-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-7zqvd\" (UID: \"eaede8c0-4327-4e2c-a850-8101195db984\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684362 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/117fa2cb-296d-4b02-bd35-be72c9070148-tmp-dir\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684408 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kk4n\" (UniqueName: \"kubernetes.io/projected/270f9528-4848-4264-8fdc-93e2ae195ec4-kube-api-access-4kk4n\") pod \"package-server-manager-77f986bd66-2m627\" (UID: \"270f9528-4848-4264-8fdc-93e2ae195ec4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2m627"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684468 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7cxt\" (UniqueName: \"kubernetes.io/projected/117fa2cb-296d-4b02-bd35-be72c9070148-kube-api-access-t7cxt\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684519 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccc3783f-31af-4d45-bf5f-1403105ce449-trusted-ca-bundle\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684542 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ec0db01-c734-4115-9b06-f28b8912aad3-apiservice-cert\") pod \"packageserver-7d4fc7d867-9qczr\" (UID: \"1ec0db01-c734-4115-9b06-f28b8912aad3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684578 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/117fa2cb-296d-4b02-bd35-be72c9070148-config\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684601 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaede8c0-4327-4e2c-a850-8101195db984-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-7zqvd\" (UID: \"eaede8c0-4327-4e2c-a850-8101195db984\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684634 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-74rfw\" (UID: \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684670 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/117fa2cb-296d-4b02-bd35-be72c9070148-etcd-service-ca\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684732 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ccc3783f-31af-4d45-bf5f-1403105ce449-etcd-client\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684763 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-74rfw\" (UID: \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684821 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f810045f-32aa-488a-a2d7-0a20f8c88429-images\") pod \"machine-config-operator-67c9d58cbb-vwl9t\" (UID: \"f810045f-32aa-488a-a2d7-0a20f8c88429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.684842 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaede8c0-4327-4e2c-a850-8101195db984-config\") pod \"openshift-controller-manager-operator-686468bdd5-7zqvd\" (UID: \"eaede8c0-4327-4e2c-a850-8101195db984\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd"
Jan 22 14:17:06 crc kubenswrapper[5110]: E0122 14:17:06.692828 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:07.192793386 +0000 UTC m=+107.414877755 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.739238 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4xlv\" (UniqueName: \"kubernetes.io/projected/8280d23b-d6cf-40ad-996e-d148f43bd0dd-kube-api-access-h4xlv\") pod \"catalog-operator-75ff9f647d-c7wsk\" (UID: \"8280d23b-d6cf-40ad-996e-d148f43bd0dd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.742510 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac1adcf2-2577-4d5c-86d8-7c21ccc049fb-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-72qpw\" (UID: \"ac1adcf2-2577-4d5c-86d8-7c21ccc049fb\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.786484 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:06 crc kubenswrapper[5110]: E0122 14:17:06.786729 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:07.286705316 +0000 UTC m=+107.508789675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.786786 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-ready\") pod \"cni-sysctl-allowlist-ds-rjmdz\" (UID: \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.786826 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lrfp5\" (UniqueName: \"kubernetes.io/projected/1ec0db01-c734-4115-9b06-f28b8912aad3-kube-api-access-lrfp5\") pod \"packageserver-7d4fc7d867-9qczr\" (UID: \"1ec0db01-c734-4115-9b06-f28b8912aad3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.786854 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvjr8\" (UniqueName: \"kubernetes.io/projected/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-kube-api-access-tvjr8\") pod \"cni-sysctl-allowlist-ds-rjmdz\" (UID: \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.786877 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d092a4ac-8c7f-4b9b-b62f-503fc9438f57-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-49nzv\" (UID: \"d092a4ac-8c7f-4b9b-b62f-503fc9438f57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-49nzv"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.786900 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eaede8c0-4327-4e2c-a850-8101195db984-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-7zqvd\" (UID: \"eaede8c0-4327-4e2c-a850-8101195db984\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.786923 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/117fa2cb-296d-4b02-bd35-be72c9070148-tmp-dir\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.786946 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a1a05965-c2b8-41e0-89d2-973217252f27-csi-data-dir\") pod \"csi-hostpathplugin-8jstw\" (UID: \"a1a05965-c2b8-41e0-89d2-973217252f27\") " pod="hostpath-provisioner/csi-hostpathplugin-8jstw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.786980 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a1a05965-c2b8-41e0-89d2-973217252f27-plugins-dir\") pod \"csi-hostpathplugin-8jstw\" (UID: \"a1a05965-c2b8-41e0-89d2-973217252f27\") " pod="hostpath-provisioner/csi-hostpathplugin-8jstw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.787003 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4kk4n\" (UniqueName: \"kubernetes.io/projected/270f9528-4848-4264-8fdc-93e2ae195ec4-kube-api-access-4kk4n\") pod \"package-server-manager-77f986bd66-2m627\" (UID: \"270f9528-4848-4264-8fdc-93e2ae195ec4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2m627"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.787046 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb332b10-a8e9-47fb-9f51-72611b44de2d-webhook-certs\") pod \"multus-admission-controller-69db94689b-hggst\" (UID: \"bb332b10-a8e9-47fb-9f51-72611b44de2d\") " pod="openshift-multus/multus-admission-controller-69db94689b-hggst"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.787071 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ab36073f-d7d7-4b3f-ae1e-70f0eec2847e-srv-cert\") pod \"olm-operator-5cdf44d969-rmjwc\" (UID: \"ab36073f-d7d7-4b3f-ae1e-70f0eec2847e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.787104 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rjmdz\" (UID: \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.787125 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5de37d4a-279a-45d0-ba01-0749e4b765a0-config-volume\") pod \"collect-profiles-29484855-8xz9c\" (UID: \"5de37d4a-279a-45d0-ba01-0749e4b765a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.787170 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t7cxt\" (UniqueName: \"kubernetes.io/projected/117fa2cb-296d-4b02-bd35-be72c9070148-kube-api-access-t7cxt\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.787760 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eaede8c0-4327-4e2c-a850-8101195db984-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-7zqvd\" (UID: \"eaede8c0-4327-4e2c-a850-8101195db984\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.788074 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/117fa2cb-296d-4b02-bd35-be72c9070148-tmp-dir\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789274 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ab36073f-d7d7-4b3f-ae1e-70f0eec2847e-profile-collector-cert\") pod \"olm-operator-5cdf44d969-rmjwc\" (UID: \"ab36073f-d7d7-4b3f-ae1e-70f0eec2847e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789336 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gpwp\" (UniqueName: \"kubernetes.io/projected/5de37d4a-279a-45d0-ba01-0749e4b765a0-kube-api-access-8gpwp\") pod \"collect-profiles-29484855-8xz9c\" (UID: \"5de37d4a-279a-45d0-ba01-0749e4b765a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789442 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccc3783f-31af-4d45-bf5f-1403105ce449-trusted-ca-bundle\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789476 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ec0db01-c734-4115-9b06-f28b8912aad3-apiservice-cert\") pod \"packageserver-7d4fc7d867-9qczr\" (UID: \"1ec0db01-c734-4115-9b06-f28b8912aad3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789503 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f08218bc-fe9e-439e-be7e-469c6232350c-tmp-dir\") pod \"kube-apiserver-operator-575994946d-cnmk5\" (UID: \"f08218bc-fe9e-439e-be7e-469c6232350c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789534 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/117fa2cb-296d-4b02-bd35-be72c9070148-config\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789551 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c5be24a2-d28d-43a7-b91e-9e271103e690-signing-key\") pod \"service-ca-74545575db-lvg4h\" (UID: \"c5be24a2-d28d-43a7-b91e-9e271103e690\") " pod="openshift-service-ca/service-ca-74545575db-lvg4h"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789582 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaede8c0-4327-4e2c-a850-8101195db984-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-7zqvd\" (UID: \"eaede8c0-4327-4e2c-a850-8101195db984\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789600 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-74rfw\" (UID: \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789633 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp7g8\" (UniqueName: \"kubernetes.io/projected/65eadafe-e3fc-4413-9bc5-8bab4872f395-kube-api-access-hp7g8\") pod \"machine-config-server-4zpj9\" (UID: \"65eadafe-e3fc-4413-9bc5-8bab4872f395\") " pod="openshift-machine-config-operator/machine-config-server-4zpj9"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789654 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/117fa2cb-296d-4b02-bd35-be72c9070148-etcd-service-ca\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789671 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmtxc\" (UniqueName: \"kubernetes.io/projected/819926eb-28d7-4ac4-a9ea-ddfd9751b3b6-kube-api-access-lmtxc\") pod \"ingress-canary-l7rdn\" (UID: \"819926eb-28d7-4ac4-a9ea-ddfd9751b3b6\") " pod="openshift-ingress-canary/ingress-canary-l7rdn"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789703 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29hw8\" (UniqueName: \"kubernetes.io/projected/a1a05965-c2b8-41e0-89d2-973217252f27-kube-api-access-29hw8\") pod \"csi-hostpathplugin-8jstw\" (UID: \"a1a05965-c2b8-41e0-89d2-973217252f27\") " pod="hostpath-provisioner/csi-hostpathplugin-8jstw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789721 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ab36073f-d7d7-4b3f-ae1e-70f0eec2847e-tmpfs\") pod \"olm-operator-5cdf44d969-rmjwc\" (UID: \"ab36073f-d7d7-4b3f-ae1e-70f0eec2847e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789747 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a1a05965-c2b8-41e0-89d2-973217252f27-registration-dir\") pod \"csi-hostpathplugin-8jstw\" (UID: \"a1a05965-c2b8-41e0-89d2-973217252f27\") " pod="hostpath-provisioner/csi-hostpathplugin-8jstw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789774 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ccc3783f-31af-4d45-bf5f-1403105ce449-etcd-client\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789792 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c5be24a2-d28d-43a7-b91e-9e271103e690-signing-cabundle\") pod \"service-ca-74545575db-lvg4h\" (UID: \"c5be24a2-d28d-43a7-b91e-9e271103e690\") " pod="openshift-service-ca/service-ca-74545575db-lvg4h"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789816 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-74rfw\" (UID: \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789832 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f08218bc-fe9e-439e-be7e-469c6232350c-config\") pod \"kube-apiserver-operator-575994946d-cnmk5\" (UID: \"f08218bc-fe9e-439e-be7e-469c6232350c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789866 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f810045f-32aa-488a-a2d7-0a20f8c88429-images\") pod \"machine-config-operator-67c9d58cbb-vwl9t\" (UID: \"f810045f-32aa-488a-a2d7-0a20f8c88429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789889 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaede8c0-4327-4e2c-a850-8101195db984-config\") pod \"openshift-controller-manager-operator-686468bdd5-7zqvd\" (UID: \"eaede8c0-4327-4e2c-a850-8101195db984\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789913 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt9xt\" (UniqueName: \"kubernetes.io/projected/c5be24a2-d28d-43a7-b91e-9e271103e690-kube-api-access-kt9xt\") pod \"service-ca-74545575db-lvg4h\" (UID: \"c5be24a2-d28d-43a7-b91e-9e271103e690\") " pod="openshift-service-ca/service-ca-74545575db-lvg4h"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789935 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/65eadafe-e3fc-4413-9bc5-8bab4872f395-node-bootstrap-token\") pod \"machine-config-server-4zpj9\" (UID: \"65eadafe-e3fc-4413-9bc5-8bab4872f395\") " pod="openshift-machine-config-operator/machine-config-server-4zpj9"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789965 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a1a05965-c2b8-41e0-89d2-973217252f27-socket-dir\") pod \"csi-hostpathplugin-8jstw\" (UID: \"a1a05965-c2b8-41e0-89d2-973217252f27\") " pod="hostpath-provisioner/csi-hostpathplugin-8jstw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.789981 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/65eadafe-e3fc-4413-9bc5-8bab4872f395-certs\") pod \"machine-config-server-4zpj9\" (UID: \"65eadafe-e3fc-4413-9bc5-8bab4872f395\") " pod="openshift-machine-config-operator/machine-config-server-4zpj9"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790041 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/117fa2cb-296d-4b02-bd35-be72c9070148-serving-cert\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790057 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5de37d4a-279a-45d0-ba01-0749e4b765a0-secret-volume\") pod \"collect-profiles-29484855-8xz9c\" (UID: \"5de37d4a-279a-45d0-ba01-0749e4b765a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790116 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a39eb6e9-c23a-4196-89b4-edc51424175a-config\") pod \"kube-controller-manager-operator-69d5f845f8-fpbgg\" (UID: \"a39eb6e9-c23a-4196-89b4-edc51424175a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790131 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/819926eb-28d7-4ac4-a9ea-ddfd9751b3b6-cert\") pod \"ingress-canary-l7rdn\" (UID: \"819926eb-28d7-4ac4-a9ea-ddfd9751b3b6\") " pod="openshift-ingress-canary/ingress-canary-l7rdn"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790172 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cftsn\" (UniqueName: \"kubernetes.io/projected/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-kube-api-access-cftsn\") pod \"marketplace-operator-547dbd544d-74rfw\" (UID: \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790221 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f08218bc-fe9e-439e-be7e-469c6232350c-kube-api-access\") pod \"kube-apiserver-operator-575994946d-cnmk5\" (UID: \"f08218bc-fe9e-439e-be7e-469c6232350c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790251 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7jkcj\" (UniqueName: \"kubernetes.io/projected/f89c668c-680e-4342-9003-c7140b9f5d51-kube-api-access-7jkcj\") pod \"downloads-747b44746d-swx4b\" (UID: \"f89c668c-680e-4342-9003-c7140b9f5d51\") " pod="openshift-console/downloads-747b44746d-swx4b"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790269 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ccc3783f-31af-4d45-bf5f-1403105ce449-encryption-config\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790287 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a39eb6e9-c23a-4196-89b4-edc51424175a-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-fpbgg\" (UID: \"a39eb6e9-c23a-4196-89b4-edc51424175a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg"
Jan 22 14:17:06 crc
kubenswrapper[5110]: I0122 14:17:06.790303 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ec0db01-c734-4115-9b06-f28b8912aad3-webhook-cert\") pod \"packageserver-7d4fc7d867-9qczr\" (UID: \"1ec0db01-c734-4115-9b06-f28b8912aad3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790318 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ccc3783f-31af-4d45-bf5f-1403105ce449-etcd-serving-ca\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790334 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvgsr\" (UniqueName: \"kubernetes.io/projected/d092a4ac-8c7f-4b9b-b62f-503fc9438f57-kube-api-access-wvgsr\") pod \"cluster-samples-operator-6b564684c8-49nzv\" (UID: \"d092a4ac-8c7f-4b9b-b62f-503fc9438f57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-49nzv" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790372 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ccc3783f-31af-4d45-bf5f-1403105ce449-audit-policies\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790390 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f08218bc-fe9e-439e-be7e-469c6232350c-serving-cert\") pod \"kube-apiserver-operator-575994946d-cnmk5\" (UID: 
\"f08218bc-fe9e-439e-be7e-469c6232350c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790432 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f810045f-32aa-488a-a2d7-0a20f8c88429-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-vwl9t\" (UID: \"f810045f-32aa-488a-a2d7-0a20f8c88429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790447 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-tmp\") pod \"marketplace-operator-547dbd544d-74rfw\" (UID: \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790476 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f810045f-32aa-488a-a2d7-0a20f8c88429-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-vwl9t\" (UID: \"f810045f-32aa-488a-a2d7-0a20f8c88429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790492 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/270f9528-4848-4264-8fdc-93e2ae195ec4-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-2m627\" (UID: \"270f9528-4848-4264-8fdc-93e2ae195ec4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2m627" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790507 5110 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/117fa2cb-296d-4b02-bd35-be72c9070148-etcd-ca\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790531 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9q96\" (UniqueName: \"kubernetes.io/projected/ab36073f-d7d7-4b3f-ae1e-70f0eec2847e-kube-api-access-t9q96\") pod \"olm-operator-5cdf44d969-rmjwc\" (UID: \"ab36073f-d7d7-4b3f-ae1e-70f0eec2847e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790566 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5m2wv\" (UniqueName: \"kubernetes.io/projected/f810045f-32aa-488a-a2d7-0a20f8c88429-kube-api-access-5m2wv\") pod \"machine-config-operator-67c9d58cbb-vwl9t\" (UID: \"f810045f-32aa-488a-a2d7-0a20f8c88429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790583 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a39eb6e9-c23a-4196-89b4-edc51424175a-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-fpbgg\" (UID: \"a39eb6e9-c23a-4196-89b4-edc51424175a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.790599 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a39eb6e9-c23a-4196-89b4-edc51424175a-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-fpbgg\" (UID: \"a39eb6e9-c23a-4196-89b4-edc51424175a\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.791990 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/117fa2cb-296d-4b02-bd35-be72c9070148-etcd-client\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.792082 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lm22q\" (UniqueName: \"kubernetes.io/projected/1c6d886d-04ca-4651-b1ca-0bda7bee5c7d-kube-api-access-lm22q\") pod \"control-plane-machine-set-operator-75ffdb6fcd-z8ls4\" (UID: \"1c6d886d-04ca-4651-b1ca-0bda7bee5c7d\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-z8ls4" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.792114 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc3783f-31af-4d45-bf5f-1403105ce449-serving-cert\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.792146 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a1a05965-c2b8-41e0-89d2-973217252f27-mountpoint-dir\") pod \"csi-hostpathplugin-8jstw\" (UID: \"a1a05965-c2b8-41e0-89d2-973217252f27\") " pod="hostpath-provisioner/csi-hostpathplugin-8jstw" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.792171 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/ccc3783f-31af-4d45-bf5f-1403105ce449-audit-dir\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.792189 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhp5q\" (UniqueName: \"kubernetes.io/projected/bb332b10-a8e9-47fb-9f51-72611b44de2d-kube-api-access-lhp5q\") pod \"multus-admission-controller-69db94689b-hggst\" (UID: \"bb332b10-a8e9-47fb-9f51-72611b44de2d\") " pod="openshift-multus/multus-admission-controller-69db94689b-hggst" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.792211 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1ec0db01-c734-4115-9b06-f28b8912aad3-tmpfs\") pod \"packageserver-7d4fc7d867-9qczr\" (UID: \"1ec0db01-c734-4115-9b06-f28b8912aad3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.792256 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.792276 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7dd4t\" (UniqueName: \"kubernetes.io/projected/ccc3783f-31af-4d45-bf5f-1403105ce449-kube-api-access-7dd4t\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 
14:17:06.792296 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z4zmg\" (UniqueName: \"kubernetes.io/projected/eaede8c0-4327-4e2c-a850-8101195db984-kube-api-access-z4zmg\") pod \"openshift-controller-manager-operator-686468bdd5-7zqvd\" (UID: \"eaede8c0-4327-4e2c-a850-8101195db984\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.792314 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c6d886d-04ca-4651-b1ca-0bda7bee5c7d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-z8ls4\" (UID: \"1c6d886d-04ca-4651-b1ca-0bda7bee5c7d\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-z8ls4" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.792336 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rjmdz\" (UID: \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.794049 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccc3783f-31af-4d45-bf5f-1403105ce449-trusted-ca-bundle\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.795201 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/117fa2cb-296d-4b02-bd35-be72c9070148-config\") pod 
\"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.798476 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ec0db01-c734-4115-9b06-f28b8912aad3-apiservice-cert\") pod \"packageserver-7d4fc7d867-9qczr\" (UID: \"1ec0db01-c734-4115-9b06-f28b8912aad3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.798862 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1ec0db01-c734-4115-9b06-f28b8912aad3-tmpfs\") pod \"packageserver-7d4fc7d867-9qczr\" (UID: \"1ec0db01-c734-4115-9b06-f28b8912aad3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr" Jan 22 14:17:06 crc kubenswrapper[5110]: E0122 14:17:06.799137 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:07.299124304 +0000 UTC m=+107.521208663 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.800390 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ccc3783f-31af-4d45-bf5f-1403105ce449-etcd-client\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.801463 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/117fa2cb-296d-4b02-bd35-be72c9070148-etcd-service-ca\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.801594 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-74rfw\" (UID: \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.802226 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f810045f-32aa-488a-a2d7-0a20f8c88429-images\") pod \"machine-config-operator-67c9d58cbb-vwl9t\" (UID: \"f810045f-32aa-488a-a2d7-0a20f8c88429\") " 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.802388 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaede8c0-4327-4e2c-a850-8101195db984-config\") pod \"openshift-controller-manager-operator-686468bdd5-7zqvd\" (UID: \"eaede8c0-4327-4e2c-a850-8101195db984\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.800878 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ccc3783f-31af-4d45-bf5f-1403105ce449-audit-dir\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.802902 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaede8c0-4327-4e2c-a850-8101195db984-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-7zqvd\" (UID: \"eaede8c0-4327-4e2c-a850-8101195db984\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.802937 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-tmp\") pod \"marketplace-operator-547dbd544d-74rfw\" (UID: \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.803512 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a39eb6e9-c23a-4196-89b4-edc51424175a-tmp-dir\") pod 
\"kube-controller-manager-operator-69d5f845f8-fpbgg\" (UID: \"a39eb6e9-c23a-4196-89b4-edc51424175a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.804118 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ccc3783f-31af-4d45-bf5f-1403105ce449-audit-policies\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.804378 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/117fa2cb-296d-4b02-bd35-be72c9070148-etcd-ca\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.804462 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f810045f-32aa-488a-a2d7-0a20f8c88429-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-vwl9t\" (UID: \"f810045f-32aa-488a-a2d7-0a20f8c88429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.804916 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ccc3783f-31af-4d45-bf5f-1403105ce449-etcd-serving-ca\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.805216 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/1c6d886d-04ca-4651-b1ca-0bda7bee5c7d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-z8ls4\" (UID: \"1c6d886d-04ca-4651-b1ca-0bda7bee5c7d\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-z8ls4" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.806190 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f810045f-32aa-488a-a2d7-0a20f8c88429-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-vwl9t\" (UID: \"f810045f-32aa-488a-a2d7-0a20f8c88429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.802812 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a39eb6e9-c23a-4196-89b4-edc51424175a-config\") pod \"kube-controller-manager-operator-69d5f845f8-fpbgg\" (UID: \"a39eb6e9-c23a-4196-89b4-edc51424175a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.812255 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a39eb6e9-c23a-4196-89b4-edc51424175a-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-fpbgg\" (UID: \"a39eb6e9-c23a-4196-89b4-edc51424175a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.812397 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/117fa2cb-296d-4b02-bd35-be72c9070148-etcd-client\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5" 
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.813212 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-74rfw\" (UID: \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.815739 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-f899n"] Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.816875 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2"] Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.816258 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc3783f-31af-4d45-bf5f-1403105ce449-serving-cert\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.815953 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/117fa2cb-296d-4b02-bd35-be72c9070148-serving-cert\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.819148 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ccc3783f-31af-4d45-bf5f-1403105ce449-encryption-config\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" Jan 22 
14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.822142 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ec0db01-c734-4115-9b06-f28b8912aad3-webhook-cert\") pod \"packageserver-7d4fc7d867-9qczr\" (UID: \"1ec0db01-c734-4115-9b06-f28b8912aad3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.827769 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/270f9528-4848-4264-8fdc-93e2ae195ec4-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-2m627\" (UID: \"270f9528-4848-4264-8fdc-93e2ae195ec4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2m627" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.832237 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-rfm46" event={"ID":"0202df03-1e3b-4cb3-a279-a6376a61ac6a","Type":"ContainerStarted","Data":"ace2cddf645977f36162402e5b023cc8881425d2b74fd8d0a6168dcb22f4afb1"} Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.837107 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w84j9"] Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.837932 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrfp5\" (UniqueName: \"kubernetes.io/projected/1ec0db01-c734-4115-9b06-f28b8912aad3-kube-api-access-lrfp5\") pod \"packageserver-7d4fc7d867-9qczr\" (UID: \"1ec0db01-c734-4115-9b06-f28b8912aad3\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr" Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.839755 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" event={"ID":"abc0de56-1146-46a6-8b5b-68373a09ba37","Type":"ContainerStarted","Data":"7cda1a2b87fa1000b5483717dae870d9a4abfe3b7de7949b076b94d65a0193ac"} Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.841893 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n"] Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.845253 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w87x4"] Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.845327 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-s875b"] Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.848872 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-pk2pv"] Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.849389 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl"] Jan 22 14:17:06 crc kubenswrapper[5110]: W0122 14:17:06.855297 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode867811f_a825_4545_9591_a00087eb4e33.slice/crio-cc08d8f2dffc658314bdc94a4ed2cca9300dfa2b34d788ed70070cc69ab03daa WatchSource:0}: Error finding container cc08d8f2dffc658314bdc94a4ed2cca9300dfa2b34d788ed70070cc69ab03daa: Status 404 returned error can't find the container with id cc08d8f2dffc658314bdc94a4ed2cca9300dfa2b34d788ed70070cc69ab03daa Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.855346 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d"] Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 
14:17:06.855382 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-qq9vl"]
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.855582 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-knd9m"]
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.856322 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kk4n\" (UniqueName: \"kubernetes.io/projected/270f9528-4848-4264-8fdc-93e2ae195ec4-kube-api-access-4kk4n\") pod \"package-server-manager-77f986bd66-2m627\" (UID: \"270f9528-4848-4264-8fdc-93e2ae195ec4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2m627"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.857328 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7cxt\" (UniqueName: \"kubernetes.io/projected/117fa2cb-296d-4b02-bd35-be72c9070148-kube-api-access-t7cxt\") pod \"etcd-operator-69b85846b6-584j5\" (UID: \"117fa2cb-296d-4b02-bd35-be72c9070148\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.861584 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg"]
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.861647 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-pgzmf"]
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.862861 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-sxktl"]
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.864427 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-flfnb"]
Jan 22 14:17:06 crc kubenswrapper[5110]: W0122 14:17:06.865908 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d1f0610_507b_49ed_9396_89d0fd379fb4.slice/crio-ffc6b1341b4d50991161e88cb46b71ee52c33792f02810fddea3692c07a24f47 WatchSource:0}: Error finding container ffc6b1341b4d50991161e88cb46b71ee52c33792f02810fddea3692c07a24f47: Status 404 returned error can't find the container with id ffc6b1341b4d50991161e88cb46b71ee52c33792f02810fddea3692c07a24f47
Jan 22 14:17:06 crc kubenswrapper[5110]: W0122 14:17:06.874964 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c99795b_25a0_4c75_87ba_3c72c10f621d.slice/crio-25288d02f017ef5e27baef4bed63f92e75b122b8f65a32c2c0fcc392fc13f6d7 WatchSource:0}: Error finding container 25288d02f017ef5e27baef4bed63f92e75b122b8f65a32c2c0fcc392fc13f6d7: Status 404 returned error can't find the container with id 25288d02f017ef5e27baef4bed63f92e75b122b8f65a32c2c0fcc392fc13f6d7
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.875157 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.884427 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm22q\" (UniqueName: \"kubernetes.io/projected/1c6d886d-04ca-4651-b1ca-0bda7bee5c7d-kube-api-access-lm22q\") pod \"control-plane-machine-set-operator-75ffdb6fcd-z8ls4\" (UID: \"1c6d886d-04ca-4651-b1ca-0bda7bee5c7d\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-z8ls4"
Jan 22 14:17:06 crc kubenswrapper[5110]: W0122 14:17:06.887002 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff17a5cc_b922_4d50_8639_eb19d9c97069.slice/crio-5aafe8ba90c31b8f02c87ab25e58f0963f95d48792e16769dbe2f7940d0dcf30 WatchSource:0}: Error finding container 5aafe8ba90c31b8f02c87ab25e58f0963f95d48792e16769dbe2f7940d0dcf30: Status 404 returned error can't find the container with id 5aafe8ba90c31b8f02c87ab25e58f0963f95d48792e16769dbe2f7940d0dcf30
Jan 22 14:17:06 crc kubenswrapper[5110]: W0122 14:17:06.888769 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7af25bf8_c994_4704_821b_ee6df60d64f1.slice/crio-638ddc267aa6c5b52ad801b38ea6bf06c0767289bce3a0280ddcd3902bb71194 WatchSource:0}: Error finding container 638ddc267aa6c5b52ad801b38ea6bf06c0767289bce3a0280ddcd3902bb71194: Status 404 returned error can't find the container with id 638ddc267aa6c5b52ad801b38ea6bf06c0767289bce3a0280ddcd3902bb71194
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.889667 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2m627"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.890373 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-fc65d"]
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893040 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893235 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/819926eb-28d7-4ac4-a9ea-ddfd9751b3b6-cert\") pod \"ingress-canary-l7rdn\" (UID: \"819926eb-28d7-4ac4-a9ea-ddfd9751b3b6\") " pod="openshift-ingress-canary/ingress-canary-l7rdn"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893350 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f08218bc-fe9e-439e-be7e-469c6232350c-kube-api-access\") pod \"kube-apiserver-operator-575994946d-cnmk5\" (UID: \"f08218bc-fe9e-439e-be7e-469c6232350c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893381 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wvgsr\" (UniqueName: \"kubernetes.io/projected/d092a4ac-8c7f-4b9b-b62f-503fc9438f57-kube-api-access-wvgsr\") pod \"cluster-samples-operator-6b564684c8-49nzv\" (UID: \"d092a4ac-8c7f-4b9b-b62f-503fc9438f57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-49nzv"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893412 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f08218bc-fe9e-439e-be7e-469c6232350c-serving-cert\") pod \"kube-apiserver-operator-575994946d-cnmk5\" (UID: \"f08218bc-fe9e-439e-be7e-469c6232350c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893448 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t9q96\" (UniqueName: \"kubernetes.io/projected/ab36073f-d7d7-4b3f-ae1e-70f0eec2847e-kube-api-access-t9q96\") pod \"olm-operator-5cdf44d969-rmjwc\" (UID: \"ab36073f-d7d7-4b3f-ae1e-70f0eec2847e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893492 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a1a05965-c2b8-41e0-89d2-973217252f27-mountpoint-dir\") pod \"csi-hostpathplugin-8jstw\" (UID: \"a1a05965-c2b8-41e0-89d2-973217252f27\") " pod="hostpath-provisioner/csi-hostpathplugin-8jstw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893536 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lhp5q\" (UniqueName: \"kubernetes.io/projected/bb332b10-a8e9-47fb-9f51-72611b44de2d-kube-api-access-lhp5q\") pod \"multus-admission-controller-69db94689b-hggst\" (UID: \"bb332b10-a8e9-47fb-9f51-72611b44de2d\") " pod="openshift-multus/multus-admission-controller-69db94689b-hggst"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893567 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rjmdz\" (UID: \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893587 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-ready\") pod \"cni-sysctl-allowlist-ds-rjmdz\" (UID: \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893630 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tvjr8\" (UniqueName: \"kubernetes.io/projected/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-kube-api-access-tvjr8\") pod \"cni-sysctl-allowlist-ds-rjmdz\" (UID: \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893652 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d092a4ac-8c7f-4b9b-b62f-503fc9438f57-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-49nzv\" (UID: \"d092a4ac-8c7f-4b9b-b62f-503fc9438f57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-49nzv"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893676 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a1a05965-c2b8-41e0-89d2-973217252f27-csi-data-dir\") pod \"csi-hostpathplugin-8jstw\" (UID: \"a1a05965-c2b8-41e0-89d2-973217252f27\") " pod="hostpath-provisioner/csi-hostpathplugin-8jstw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893698 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a1a05965-c2b8-41e0-89d2-973217252f27-plugins-dir\") pod \"csi-hostpathplugin-8jstw\" (UID: \"a1a05965-c2b8-41e0-89d2-973217252f27\") " pod="hostpath-provisioner/csi-hostpathplugin-8jstw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893732 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb332b10-a8e9-47fb-9f51-72611b44de2d-webhook-certs\") pod \"multus-admission-controller-69db94689b-hggst\" (UID: \"bb332b10-a8e9-47fb-9f51-72611b44de2d\") " pod="openshift-multus/multus-admission-controller-69db94689b-hggst"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893757 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ab36073f-d7d7-4b3f-ae1e-70f0eec2847e-srv-cert\") pod \"olm-operator-5cdf44d969-rmjwc\" (UID: \"ab36073f-d7d7-4b3f-ae1e-70f0eec2847e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893782 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rjmdz\" (UID: \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893805 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5de37d4a-279a-45d0-ba01-0749e4b765a0-config-volume\") pod \"collect-profiles-29484855-8xz9c\" (UID: \"5de37d4a-279a-45d0-ba01-0749e4b765a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893832 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ab36073f-d7d7-4b3f-ae1e-70f0eec2847e-profile-collector-cert\") pod \"olm-operator-5cdf44d969-rmjwc\" (UID: \"ab36073f-d7d7-4b3f-ae1e-70f0eec2847e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893857 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8gpwp\" (UniqueName: \"kubernetes.io/projected/5de37d4a-279a-45d0-ba01-0749e4b765a0-kube-api-access-8gpwp\") pod \"collect-profiles-29484855-8xz9c\" (UID: \"5de37d4a-279a-45d0-ba01-0749e4b765a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893930 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f08218bc-fe9e-439e-be7e-469c6232350c-tmp-dir\") pod \"kube-apiserver-operator-575994946d-cnmk5\" (UID: \"f08218bc-fe9e-439e-be7e-469c6232350c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893951 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rjmdz\" (UID: \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.893960 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c5be24a2-d28d-43a7-b91e-9e271103e690-signing-key\") pod \"service-ca-74545575db-lvg4h\" (UID: \"c5be24a2-d28d-43a7-b91e-9e271103e690\") " pod="openshift-service-ca/service-ca-74545575db-lvg4h"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.894057 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hp7g8\" (UniqueName: \"kubernetes.io/projected/65eadafe-e3fc-4413-9bc5-8bab4872f395-kube-api-access-hp7g8\") pod \"machine-config-server-4zpj9\" (UID: \"65eadafe-e3fc-4413-9bc5-8bab4872f395\") " pod="openshift-machine-config-operator/machine-config-server-4zpj9"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.894089 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lmtxc\" (UniqueName: \"kubernetes.io/projected/819926eb-28d7-4ac4-a9ea-ddfd9751b3b6-kube-api-access-lmtxc\") pod \"ingress-canary-l7rdn\" (UID: \"819926eb-28d7-4ac4-a9ea-ddfd9751b3b6\") " pod="openshift-ingress-canary/ingress-canary-l7rdn"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.894128 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-29hw8\" (UniqueName: \"kubernetes.io/projected/a1a05965-c2b8-41e0-89d2-973217252f27-kube-api-access-29hw8\") pod \"csi-hostpathplugin-8jstw\" (UID: \"a1a05965-c2b8-41e0-89d2-973217252f27\") " pod="hostpath-provisioner/csi-hostpathplugin-8jstw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.894147 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ab36073f-d7d7-4b3f-ae1e-70f0eec2847e-tmpfs\") pod \"olm-operator-5cdf44d969-rmjwc\" (UID: \"ab36073f-d7d7-4b3f-ae1e-70f0eec2847e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.894169 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a1a05965-c2b8-41e0-89d2-973217252f27-registration-dir\") pod \"csi-hostpathplugin-8jstw\" (UID: \"a1a05965-c2b8-41e0-89d2-973217252f27\") " pod="hostpath-provisioner/csi-hostpathplugin-8jstw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.894201 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c5be24a2-d28d-43a7-b91e-9e271103e690-signing-cabundle\") pod \"service-ca-74545575db-lvg4h\" (UID: \"c5be24a2-d28d-43a7-b91e-9e271103e690\") " pod="openshift-service-ca/service-ca-74545575db-lvg4h"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.894232 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f08218bc-fe9e-439e-be7e-469c6232350c-config\") pod \"kube-apiserver-operator-575994946d-cnmk5\" (UID: \"f08218bc-fe9e-439e-be7e-469c6232350c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.894289 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kt9xt\" (UniqueName: \"kubernetes.io/projected/c5be24a2-d28d-43a7-b91e-9e271103e690-kube-api-access-kt9xt\") pod \"service-ca-74545575db-lvg4h\" (UID: \"c5be24a2-d28d-43a7-b91e-9e271103e690\") " pod="openshift-service-ca/service-ca-74545575db-lvg4h"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.894308 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/65eadafe-e3fc-4413-9bc5-8bab4872f395-node-bootstrap-token\") pod \"machine-config-server-4zpj9\" (UID: \"65eadafe-e3fc-4413-9bc5-8bab4872f395\") " pod="openshift-machine-config-operator/machine-config-server-4zpj9"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.894341 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a1a05965-c2b8-41e0-89d2-973217252f27-socket-dir\") pod \"csi-hostpathplugin-8jstw\" (UID: \"a1a05965-c2b8-41e0-89d2-973217252f27\") " pod="hostpath-provisioner/csi-hostpathplugin-8jstw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.894358 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/65eadafe-e3fc-4413-9bc5-8bab4872f395-certs\") pod \"machine-config-server-4zpj9\" (UID: \"65eadafe-e3fc-4413-9bc5-8bab4872f395\") " pod="openshift-machine-config-operator/machine-config-server-4zpj9"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.894409 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5de37d4a-279a-45d0-ba01-0749e4b765a0-secret-volume\") pod \"collect-profiles-29484855-8xz9c\" (UID: \"5de37d4a-279a-45d0-ba01-0749e4b765a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.895496 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a1a05965-c2b8-41e0-89d2-973217252f27-mountpoint-dir\") pod \"csi-hostpathplugin-8jstw\" (UID: \"a1a05965-c2b8-41e0-89d2-973217252f27\") " pod="hostpath-provisioner/csi-hostpathplugin-8jstw"
Jan 22 14:17:06 crc kubenswrapper[5110]: E0122 14:17:06.895599 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:07.395568481 +0000 UTC m=+107.617652900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.896222 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a1a05965-c2b8-41e0-89d2-973217252f27-csi-data-dir\") pod \"csi-hostpathplugin-8jstw\" (UID: \"a1a05965-c2b8-41e0-89d2-973217252f27\") " pod="hostpath-provisioner/csi-hostpathplugin-8jstw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.896277 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a1a05965-c2b8-41e0-89d2-973217252f27-plugins-dir\") pod \"csi-hostpathplugin-8jstw\" (UID: \"a1a05965-c2b8-41e0-89d2-973217252f27\") " pod="hostpath-provisioner/csi-hostpathplugin-8jstw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.896278 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a1a05965-c2b8-41e0-89d2-973217252f27-registration-dir\") pod \"csi-hostpathplugin-8jstw\" (UID: \"a1a05965-c2b8-41e0-89d2-973217252f27\") " pod="hostpath-provisioner/csi-hostpathplugin-8jstw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.896399 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a1a05965-c2b8-41e0-89d2-973217252f27-socket-dir\") pod \"csi-hostpathplugin-8jstw\" (UID: \"a1a05965-c2b8-41e0-89d2-973217252f27\") " pod="hostpath-provisioner/csi-hostpathplugin-8jstw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.896864 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-ready\") pod \"cni-sysctl-allowlist-ds-rjmdz\" (UID: \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.897281 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5de37d4a-279a-45d0-ba01-0749e4b765a0-config-volume\") pod \"collect-profiles-29484855-8xz9c\" (UID: \"5de37d4a-279a-45d0-ba01-0749e4b765a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.897320 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rjmdz\" (UID: \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.897438 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ab36073f-d7d7-4b3f-ae1e-70f0eec2847e-tmpfs\") pod \"olm-operator-5cdf44d969-rmjwc\" (UID: \"ab36073f-d7d7-4b3f-ae1e-70f0eec2847e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.898487 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f08218bc-fe9e-439e-be7e-469c6232350c-tmp-dir\") pod \"kube-apiserver-operator-575994946d-cnmk5\" (UID: \"f08218bc-fe9e-439e-be7e-469c6232350c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.899514 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c5be24a2-d28d-43a7-b91e-9e271103e690-signing-cabundle\") pod \"service-ca-74545575db-lvg4h\" (UID: \"c5be24a2-d28d-43a7-b91e-9e271103e690\") " pod="openshift-service-ca/service-ca-74545575db-lvg4h"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.899814 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5de37d4a-279a-45d0-ba01-0749e4b765a0-secret-volume\") pod \"collect-profiles-29484855-8xz9c\" (UID: \"5de37d4a-279a-45d0-ba01-0749e4b765a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.900296 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ab36073f-d7d7-4b3f-ae1e-70f0eec2847e-profile-collector-cert\") pod \"olm-operator-5cdf44d969-rmjwc\" (UID: \"ab36073f-d7d7-4b3f-ae1e-70f0eec2847e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.900580 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c5be24a2-d28d-43a7-b91e-9e271103e690-signing-key\") pod \"service-ca-74545575db-lvg4h\" (UID: \"c5be24a2-d28d-43a7-b91e-9e271103e690\") " pod="openshift-service-ca/service-ca-74545575db-lvg4h"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.900980 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bb332b10-a8e9-47fb-9f51-72611b44de2d-webhook-certs\") pod \"multus-admission-controller-69db94689b-hggst\" (UID: \"bb332b10-a8e9-47fb-9f51-72611b44de2d\") " pod="openshift-multus/multus-admission-controller-69db94689b-hggst"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.901017 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/65eadafe-e3fc-4413-9bc5-8bab4872f395-certs\") pod \"machine-config-server-4zpj9\" (UID: \"65eadafe-e3fc-4413-9bc5-8bab4872f395\") " pod="openshift-machine-config-operator/machine-config-server-4zpj9"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.901081 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f08218bc-fe9e-439e-be7e-469c6232350c-config\") pod \"kube-apiserver-operator-575994946d-cnmk5\" (UID: \"f08218bc-fe9e-439e-be7e-469c6232350c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.901675 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f08218bc-fe9e-439e-be7e-469c6232350c-serving-cert\") pod \"kube-apiserver-operator-575994946d-cnmk5\" (UID: \"f08218bc-fe9e-439e-be7e-469c6232350c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.902745 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/819926eb-28d7-4ac4-a9ea-ddfd9751b3b6-cert\") pod \"ingress-canary-l7rdn\" (UID: \"819926eb-28d7-4ac4-a9ea-ddfd9751b3b6\") " pod="openshift-ingress-canary/ingress-canary-l7rdn"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.903104 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d092a4ac-8c7f-4b9b-b62f-503fc9438f57-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-49nzv\" (UID: \"d092a4ac-8c7f-4b9b-b62f-503fc9438f57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-49nzv"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.903547 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dd4t\" (UniqueName: \"kubernetes.io/projected/ccc3783f-31af-4d45-bf5f-1403105ce449-kube-api-access-7dd4t\") pod \"apiserver-8596bd845d-4tb8q\" (UID: \"ccc3783f-31af-4d45-bf5f-1403105ce449\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.907187 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-2vcrx"]
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.908113 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ab36073f-d7d7-4b3f-ae1e-70f0eec2847e-srv-cert\") pod \"olm-operator-5cdf44d969-rmjwc\" (UID: \"ab36073f-d7d7-4b3f-ae1e-70f0eec2847e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.909497 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/65eadafe-e3fc-4413-9bc5-8bab4872f395-node-bootstrap-token\") pod \"machine-config-server-4zpj9\" (UID: \"65eadafe-e3fc-4413-9bc5-8bab4872f395\") " pod="openshift-machine-config-operator/machine-config-server-4zpj9"
Jan 22 14:17:06 crc kubenswrapper[5110]: W0122 14:17:06.911889 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f7f31bb_3b08_490d_8e92_09bb8ce46b18.slice/crio-4e095ecf1f184bf3bfba5e875956d0761bbca73afb7de4fb5edc5c997e0e306d WatchSource:0}: Error finding container 4e095ecf1f184bf3bfba5e875956d0761bbca73afb7de4fb5edc5c997e0e306d: Status 404 returned error can't find the container with id 4e095ecf1f184bf3bfba5e875956d0761bbca73afb7de4fb5edc5c997e0e306d
Jan 22 14:17:06 crc kubenswrapper[5110]: W0122 14:17:06.913221 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bdf8fc3_f5ff_4721_b79f_539fec1dabd5.slice/crio-63d8f81c7c3f9824ab7779fcf22f46d5d218c410984b78c7bb82d2f0f7a28be7 WatchSource:0}: Error finding container 63d8f81c7c3f9824ab7779fcf22f46d5d218c410984b78c7bb82d2f0f7a28be7: Status 404 returned error can't find the container with id 63d8f81c7c3f9824ab7779fcf22f46d5d218c410984b78c7bb82d2f0f7a28be7
Jan 22 14:17:06 crc kubenswrapper[5110]: W0122 14:17:06.913780 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa774b1c_ef48_4bd8_a4df_d3b963e547e6.slice/crio-38c1b4321b1bfe5aa7faa9d9538280326ef5670bad832a44cf113f418d9744dd WatchSource:0}: Error finding container 38c1b4321b1bfe5aa7faa9d9538280326ef5670bad832a44cf113f418d9744dd: Status 404 returned error can't find the container with id 38c1b4321b1bfe5aa7faa9d9538280326ef5670bad832a44cf113f418d9744dd
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.917330 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4zmg\" (UniqueName: \"kubernetes.io/projected/eaede8c0-4327-4e2c-a850-8101195db984-kube-api-access-z4zmg\") pod \"openshift-controller-manager-operator-686468bdd5-7zqvd\" (UID: \"eaede8c0-4327-4e2c-a850-8101195db984\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd"
Jan 22 14:17:06 crc kubenswrapper[5110]: W0122 14:17:06.919731 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabca4105_161b_4d77_9d70_35b13bbcabfd.slice/crio-12d0c5012ac40cf12b6df2741975194a4d043f2923852665c448061da53a2d91 WatchSource:0}: Error finding container 12d0c5012ac40cf12b6df2741975194a4d043f2923852665c448061da53a2d91: Status 404 returned error can't find the container with id 12d0c5012ac40cf12b6df2741975194a4d043f2923852665c448061da53a2d91
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.922150 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk"
Jan 22 14:17:06 crc kubenswrapper[5110]: W0122 14:17:06.924000 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f92c314_2d0f_42f1_97d2_3914c2f2a73c.slice/crio-4b4ffd734f44577eef14666bbec1c50d0fecfc93ddbf7f770891e97237a7dfdd WatchSource:0}: Error finding container 4b4ffd734f44577eef14666bbec1c50d0fecfc93ddbf7f770891e97237a7dfdd: Status 404 returned error can't find the container with id 4b4ffd734f44577eef14666bbec1c50d0fecfc93ddbf7f770891e97237a7dfdd
Jan 22 14:17:06 crc kubenswrapper[5110]: W0122 14:17:06.943771 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod526a80ab_1b72_48c5_ae8e_cc5c0fa9f7bf.slice/crio-7a294a5eaa8b7e96a35e98ac55b3efabf7cd13cc328316ebddfa45e15d8705cb WatchSource:0}: Error finding container 7a294a5eaa8b7e96a35e98ac55b3efabf7cd13cc328316ebddfa45e15d8705cb: Status 404 returned error can't find the container with id 7a294a5eaa8b7e96a35e98ac55b3efabf7cd13cc328316ebddfa45e15d8705cb
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.963599 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jkcj\" (UniqueName: \"kubernetes.io/projected/f89c668c-680e-4342-9003-c7140b9f5d51-kube-api-access-7jkcj\") pod \"downloads-747b44746d-swx4b\" (UID: \"f89c668c-680e-4342-9003-c7140b9f5d51\") " pod="openshift-console/downloads-747b44746d-swx4b"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.967839 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-ntc52"]
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.983545 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.984139 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-z8ls4"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.991324 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m2wv\" (UniqueName: \"kubernetes.io/projected/f810045f-32aa-488a-a2d7-0a20f8c88429-kube-api-access-5m2wv\") pod \"machine-config-operator-67c9d58cbb-vwl9t\" (UID: \"f810045f-32aa-488a-a2d7-0a20f8c88429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t"
Jan 22 14:17:06 crc kubenswrapper[5110]: I0122 14:17:06.997853 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:06 crc kubenswrapper[5110]: E0122 14:17:06.998349 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:07.498330875 +0000 UTC m=+107.720415234 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.007783 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cftsn\" (UniqueName: \"kubernetes.io/projected/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-kube-api-access-cftsn\") pod \"marketplace-operator-547dbd544d-74rfw\" (UID: \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw"
Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.015075 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr"
Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.019183 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a39eb6e9-c23a-4196-89b4-edc51424175a-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-fpbgg\" (UID: \"a39eb6e9-c23a-4196-89b4-edc51424175a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg"
Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.037557 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmtxc\" (UniqueName: \"kubernetes.io/projected/819926eb-28d7-4ac4-a9ea-ddfd9751b3b6-kube-api-access-lmtxc\") pod \"ingress-canary-l7rdn\" (UID: \"819926eb-28d7-4ac4-a9ea-ddfd9751b3b6\") " pod="openshift-ingress-canary/ingress-canary-l7rdn"
Jan 22 14:17:07 crc kubenswrapper[5110]: W0122
14:17:07.040446 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41b28f5f_47e9_49c2_a0f3_efb26640b87f.slice/crio-926e09f5f3e4e40c6938ff2b941adcbad2c8ce3639ead91cf04cb4c8eb54f9ec WatchSource:0}: Error finding container 926e09f5f3e4e40c6938ff2b941adcbad2c8ce3639ead91cf04cb4c8eb54f9ec: Status 404 returned error can't find the container with id 926e09f5f3e4e40c6938ff2b941adcbad2c8ce3639ead91cf04cb4c8eb54f9ec Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.064680 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhp5q\" (UniqueName: \"kubernetes.io/projected/bb332b10-a8e9-47fb-9f51-72611b44de2d-kube-api-access-lhp5q\") pod \"multus-admission-controller-69db94689b-hggst\" (UID: \"bb332b10-a8e9-47fb-9f51-72611b44de2d\") " pod="openshift-multus/multus-admission-controller-69db94689b-hggst" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.067581 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-hggst" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.082401 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp7g8\" (UniqueName: \"kubernetes.io/projected/65eadafe-e3fc-4413-9bc5-8bab4872f395-kube-api-access-hp7g8\") pod \"machine-config-server-4zpj9\" (UID: \"65eadafe-e3fc-4413-9bc5-8bab4872f395\") " pod="openshift-machine-config-operator/machine-config-server-4zpj9" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.099144 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:07 crc kubenswrapper[5110]: E0122 14:17:07.099694 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:07.599494246 +0000 UTC m=+107.821578605 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.100639 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:07 crc kubenswrapper[5110]: E0122 14:17:07.101067 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:07.601049407 +0000 UTC m=+107.823133766 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.102494 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9q96\" (UniqueName: \"kubernetes.io/projected/ab36073f-d7d7-4b3f-ae1e-70f0eec2847e-kube-api-access-t9q96\") pod \"olm-operator-5cdf44d969-rmjwc\" (UID: \"ab36073f-d7d7-4b3f-ae1e-70f0eec2847e\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.120029 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f08218bc-fe9e-439e-be7e-469c6232350c-kube-api-access\") pod \"kube-apiserver-operator-575994946d-cnmk5\" (UID: \"f08218bc-fe9e-439e-be7e-469c6232350c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.122994 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-swx4b" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.139722 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-29hw8\" (UniqueName: \"kubernetes.io/projected/a1a05965-c2b8-41e0-89d2-973217252f27-kube-api-access-29hw8\") pod \"csi-hostpathplugin-8jstw\" (UID: \"a1a05965-c2b8-41e0-89d2-973217252f27\") " pod="hostpath-provisioner/csi-hostpathplugin-8jstw" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.141304 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.158715 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.164194 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvjr8\" (UniqueName: \"kubernetes.io/projected/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-kube-api-access-tvjr8\") pod \"cni-sysctl-allowlist-ds-rjmdz\" (UID: \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.165558 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.173511 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-l7rdn" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.177150 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvgsr\" (UniqueName: \"kubernetes.io/projected/d092a4ac-8c7f-4b9b-b62f-503fc9438f57-kube-api-access-wvgsr\") pod \"cluster-samples-operator-6b564684c8-49nzv\" (UID: \"d092a4ac-8c7f-4b9b-b62f-503fc9438f57\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-49nzv" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.182600 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4zpj9" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.184686 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.196197 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt9xt\" (UniqueName: \"kubernetes.io/projected/c5be24a2-d28d-43a7-b91e-9e271103e690-kube-api-access-kt9xt\") pod \"service-ca-74545575db-lvg4h\" (UID: \"c5be24a2-d28d-43a7-b91e-9e271103e690\") " pod="openshift-service-ca/service-ca-74545575db-lvg4h" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.202446 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:07 crc kubenswrapper[5110]: E0122 14:17:07.202807 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:07.702790554 +0000 UTC m=+107.924874913 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.220571 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gpwp\" (UniqueName: \"kubernetes.io/projected/5de37d4a-279a-45d0-ba01-0749e4b765a0-kube-api-access-8gpwp\") pod \"collect-profiles-29484855-8xz9c\" (UID: \"5de37d4a-279a-45d0-ba01-0749e4b765a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.262981 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk"] Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.299713 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.306655 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:07 crc kubenswrapper[5110]: E0122 14:17:07.307231 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:07.807208602 +0000 UTC m=+108.029292971 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.309070 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2m627"] Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.331905 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.342974 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.362094 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.377729 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-lvg4h" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.386024 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c" Jan 22 14:17:07 crc kubenswrapper[5110]: W0122 14:17:07.387711 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8280d23b_d6cf_40ad_996e_d148f43bd0dd.slice/crio-fca0928b549422780bece884e59f6dc26e4ab06232fa65792f9662076b22433f WatchSource:0}: Error finding container fca0928b549422780bece884e59f6dc26e4ab06232fa65792f9662076b22433f: Status 404 returned error can't find the container with id fca0928b549422780bece884e59f6dc26e4ab06232fa65792f9662076b22433f Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.393095 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-49nzv" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.417126 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:07 crc kubenswrapper[5110]: E0122 14:17:07.417522 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:07.917507104 +0000 UTC m=+108.139591463 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.419369 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-584j5"] Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.419760 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8jstw" Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.519477 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:07 crc kubenswrapper[5110]: E0122 14:17:07.526784 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:08.02676599 +0000 UTC m=+108.248850349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:07 crc kubenswrapper[5110]: W0122 14:17:07.584213 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod117fa2cb_296d_4b02_bd35_be72c9070148.slice/crio-3108dd299e2cc3d159b9463661ee70b8361428af653b8346ff2c7f4a144a03fc WatchSource:0}: Error finding container 3108dd299e2cc3d159b9463661ee70b8361428af653b8346ff2c7f4a144a03fc: Status 404 returned error can't find the container with id 3108dd299e2cc3d159b9463661ee70b8361428af653b8346ff2c7f4a144a03fc Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.621438 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-z8ls4"] Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.621807 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:07 crc kubenswrapper[5110]: E0122 14:17:07.622976 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:08.12295247 +0000 UTC m=+108.345036839 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:07 crc kubenswrapper[5110]: W0122 14:17:07.658032 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c6d886d_04ca_4651_b1ca_0bda7bee5c7d.slice/crio-96540a1fbab91775523ec213a683b1aff25fd7822e154bdae486151f23bad7dc WatchSource:0}: Error finding container 96540a1fbab91775523ec213a683b1aff25fd7822e154bdae486151f23bad7dc: Status 404 returned error can't find the container with id 96540a1fbab91775523ec213a683b1aff25fd7822e154bdae486151f23bad7dc Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.721711 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw"] Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.724609 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:07 crc kubenswrapper[5110]: E0122 14:17:07.726448 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:08.226424812 +0000 UTC m=+108.448509191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.830304 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:07 crc kubenswrapper[5110]: E0122 14:17:07.832057 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af 
nodeName:}" failed. No retries permitted until 2026-01-22 14:17:08.33202607 +0000 UTC m=+108.554110429 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.873882 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-z8ls4" event={"ID":"1c6d886d-04ca-4651-b1ca-0bda7bee5c7d","Type":"ContainerStarted","Data":"96540a1fbab91775523ec213a683b1aff25fd7822e154bdae486151f23bad7dc"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.876184 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5" event={"ID":"117fa2cb-296d-4b02-bd35-be72c9070148","Type":"ContainerStarted","Data":"3108dd299e2cc3d159b9463661ee70b8361428af653b8346ff2c7f4a144a03fc"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.886085 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" event={"ID":"aa774b1c-ef48-4bd8-a4df-d3b963e547e6","Type":"ContainerStarted","Data":"38c1b4321b1bfe5aa7faa9d9538280326ef5670bad832a44cf113f418d9744dd"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.893943 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t"] Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.896141 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-rfm46" 
event={"ID":"0202df03-1e3b-4cb3-a279-a6376a61ac6a","Type":"ContainerStarted","Data":"576502c5a15d3b5a4bafb05f588eafdf2db5f2b2ce8e3f27cc3e749b3a297b80"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.900357 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-f899n" event={"ID":"e867811f-a825-4545-9591-a00087eb4e33","Type":"ContainerStarted","Data":"cc08d8f2dffc658314bdc94a4ed2cca9300dfa2b34d788ed70070cc69ab03daa"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.901916 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-s875b" event={"ID":"1bdf8fc3-f5ff-4721-b79f-539fec1dabd5","Type":"ContainerStarted","Data":"63d8f81c7c3f9824ab7779fcf22f46d5d218c410984b78c7bb82d2f0f7a28be7"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.902811 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" event={"ID":"abc0de56-1146-46a6-8b5b-68373a09ba37","Type":"ContainerStarted","Data":"feb61aa83fb57443dfe617544c5999a3d831e999594d8947ce5d9c11675f7095"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.903580 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk" event={"ID":"8280d23b-d6cf-40ad-996e-d148f43bd0dd","Type":"ContainerStarted","Data":"fca0928b549422780bece884e59f6dc26e4ab06232fa65792f9662076b22433f"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.904370 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-ntc52" event={"ID":"41b28f5f-47e9-49c2-a0f3-efb26640b87f","Type":"ContainerStarted","Data":"926e09f5f3e4e40c6938ff2b941adcbad2c8ce3639ead91cf04cb4c8eb54f9ec"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.907032 5110 generic.go:358] "Generic (PLEG): container finished" podID="072ff625-701b-439b-841f-07ca74f91eee" 
containerID="3b29415240e7269e6ca916e916ca24539a78f29d3fe7ac6311a26246528c0e7a" exitCode=0 Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.907122 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-fc65d" event={"ID":"072ff625-701b-439b-841f-07ca74f91eee","Type":"ContainerDied","Data":"3b29415240e7269e6ca916e916ca24539a78f29d3fe7ac6311a26246528c0e7a"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.907144 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-fc65d" event={"ID":"072ff625-701b-439b-841f-07ca74f91eee","Type":"ContainerStarted","Data":"a0d770a1320e2ae3514ead25aeef257f128ef70bba455ccbb8490b873612827b"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.914823 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2vcrx" event={"ID":"526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf","Type":"ContainerStarted","Data":"9aa4e530e6858c9c101ea479103885b6a447bfcb81d19fd9df7a2ea4dc1176e3"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.914873 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2vcrx" event={"ID":"526a80ab-1b72-48c5-ae8e-cc5c0fa9f7bf","Type":"ContainerStarted","Data":"7a294a5eaa8b7e96a35e98ac55b3efabf7cd13cc328316ebddfa45e15d8705cb"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.934711 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:07 crc kubenswrapper[5110]: E0122 14:17:07.934973 5110 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:08.434959499 +0000 UTC m=+108.657043858 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.950867 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w87x4" event={"ID":"9d1f0610-507b-49ed-9396-89d0fd379fb4","Type":"ContainerStarted","Data":"75e6a6879ed666031face2daf0d7c6ccc3e586cb739cec181c08c2c004ed52d5"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.950903 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w87x4" event={"ID":"9d1f0610-507b-49ed-9396-89d0fd379fb4","Type":"ContainerStarted","Data":"ffc6b1341b4d50991161e88cb46b71ee52c33792f02810fddea3692c07a24f47"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.960066 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-pk2pv" event={"ID":"5c99795b-25a0-4c75-87ba-3c72c10f621d","Type":"ContainerStarted","Data":"8935870d7256ead6e201c2dccc4d2e6760b8a34eee35e47c31120d71a93b60f3"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.960112 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-pk2pv" 
event={"ID":"5c99795b-25a0-4c75-87ba-3c72c10f621d","Type":"ContainerStarted","Data":"25288d02f017ef5e27baef4bed63f92e75b122b8f65a32c2c0fcc392fc13f6d7"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.966658 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" event={"ID":"7f92c314-2d0f-42f1-97d2-3914c2f2a73c","Type":"ContainerStarted","Data":"e8697df7457e9827da840fec8d0661c8daf779f3b30a49b111baaf69c300a4f8"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.966716 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" event={"ID":"7f92c314-2d0f-42f1-97d2-3914c2f2a73c","Type":"ContainerStarted","Data":"4b4ffd734f44577eef14666bbec1c50d0fecfc93ddbf7f770891e97237a7dfdd"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.973807 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" event={"ID":"7af25bf8-c994-4704-821b-ee6df60d64f1","Type":"ContainerStarted","Data":"638ddc267aa6c5b52ad801b38ea6bf06c0767289bce3a0280ddcd3902bb71194"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.978999 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2m627" event={"ID":"270f9528-4848-4264-8fdc-93e2ae195ec4","Type":"ContainerStarted","Data":"3339699bd4cd033dec199954a83c6482c2138a2ae7b240cb98d4cfa7e8ce34d7"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.984570 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg" event={"ID":"3170f172-b46b-4670-94e6-a340749c97e4","Type":"ContainerStarted","Data":"e8acd71e28ba2c0c36555323536e8fdece1bedfaa5ecbe6d7541865ab7c071ca"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.989051 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-flfnb" event={"ID":"abca4105-161b-4d77-9d70-35b13bbcabfd","Type":"ContainerStarted","Data":"4a0f8ea0519cae2219c9712d7f837e7e5c8febf5c4d33d383ea72d30fd80b741"} Jan 22 14:17:07 crc kubenswrapper[5110]: I0122 14:17:07.989095 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-flfnb" event={"ID":"abca4105-161b-4d77-9d70-35b13bbcabfd","Type":"ContainerStarted","Data":"12d0c5012ac40cf12b6df2741975194a4d043f2923852665c448061da53a2d91"} Jan 22 14:17:08 crc kubenswrapper[5110]: W0122 14:17:08.000848 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf810045f_32aa_488a_a2d7_0a20f8c88429.slice/crio-7cd53dd83d5cbdcc0b4399428b9fe2fc9c09e78422d7869682f459f78a3ac424 WatchSource:0}: Error finding container 7cd53dd83d5cbdcc0b4399428b9fe2fc9c09e78422d7869682f459f78a3ac424: Status 404 returned error can't find the container with id 7cd53dd83d5cbdcc0b4399428b9fe2fc9c09e78422d7869682f459f78a3ac424 Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.005280 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-hggst"] Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.028653 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n" event={"ID":"ff17a5cc-b922-4d50-8639-eb19d9c97069","Type":"ContainerStarted","Data":"3685e1827d99326c7797edfb6d60a9255fbfb0dc7b7b057861eb923873ce1fe1"} Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.028702 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n" event={"ID":"ff17a5cc-b922-4d50-8639-eb19d9c97069","Type":"ContainerStarted","Data":"5aafe8ba90c31b8f02c87ab25e58f0963f95d48792e16769dbe2f7940d0dcf30"} Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.035356 5110 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:08 crc kubenswrapper[5110]: E0122 14:17:08.035739 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:08.535719849 +0000 UTC m=+108.757804208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.037019 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr"] Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.038858 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w84j9" event={"ID":"a3216b0f-6d90-469f-a674-5bfeb6bafb5c","Type":"ContainerStarted","Data":"3e8bdb8ed742edf16800e00f0d59948f86ebcbbe7ff23481ebadeb89eff05748"} Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.038902 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w84j9" 
event={"ID":"a3216b0f-6d90-469f-a674-5bfeb6bafb5c","Type":"ContainerStarted","Data":"cee16b4ec2d2b9db06c450eae5804ff984f61e778eb64dca3d4200ac72788a65"} Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.058419 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" event={"ID":"9f7f31bb-3b08-490d-8e92-09bb8ce46b18","Type":"ContainerStarted","Data":"aea935caf33721a89fa644f7d9432ef5be32005c04ee15a79d7bfb5a86591cd2"} Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.058491 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" event={"ID":"9f7f31bb-3b08-490d-8e92-09bb8ce46b18","Type":"ContainerStarted","Data":"4e095ecf1f184bf3bfba5e875956d0761bbca73afb7de4fb5edc5c997e0e306d"} Jan 22 14:17:08 crc kubenswrapper[5110]: W0122 14:17:08.058938 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ec0db01_c734_4115_9b06_f28b8912aad3.slice/crio-28aeae0364a3f6380493ddd1498c1dd8795e3e413b78b048173638026fd99f73 WatchSource:0}: Error finding container 28aeae0364a3f6380493ddd1498c1dd8795e3e413b78b048173638026fd99f73: Status 404 returned error can't find the container with id 28aeae0364a3f6380493ddd1498c1dd8795e3e413b78b048173638026fd99f73 Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.088344 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-sxktl" event={"ID":"e7ad86e7-ae45-45ea-b4f5-ea725569075a","Type":"ContainerStarted","Data":"826e8895e51e56070b29b2c156d26bdfcb80bdb15ef3639155b2c4b3d4dad290"} Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.088388 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-sxktl" 
event={"ID":"e7ad86e7-ae45-45ea-b4f5-ea725569075a","Type":"ContainerStarted","Data":"d70da0860677b75b2b84ee33a73c36bcfb88bb43eb31cdc8d7f61e5be902ab8c"} Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.096683 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.096952 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.107969 5110 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-qq9vl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.108427 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" podUID="7f92c314-2d0f-42f1-97d2-3914c2f2a73c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.107996 5110 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-w9fgl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.108650 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" podUID="9f7f31bb-3b08-490d-8e92-09bb8ce46b18" 
containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.115221 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d" event={"ID":"863750d1-e921-4e7e-b99b-365391af8edf","Type":"ContainerStarted","Data":"fa7755832d082564a16f08e641761c67e7a013315e55a405ea052de84e4536fc"} Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.115266 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d" event={"ID":"863750d1-e921-4e7e-b99b-365391af8edf","Type":"ContainerStarted","Data":"bb811342bf8a73047bdce35f5e241541aee8b62757bec9ffe02dce228ee73a5a"} Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.120240 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" event={"ID":"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2","Type":"ContainerStarted","Data":"70dad99c565c0b5b5295593e7caa12fec5f9e7b368986fb3db822fad09bd8a89"} Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.120288 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" event={"ID":"b1a0bd76-a5d8-4aca-84b2-37ec592e8ff2","Type":"ContainerStarted","Data":"6a856551fcaabfe51f0c6d9dc88f37cc67884c5fa840ee16afba44d95284864a"} Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.136975 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " 
pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:08 crc kubenswrapper[5110]: E0122 14:17:08.137269 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:08.63725383 +0000 UTC m=+108.859338189 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:08 crc kubenswrapper[5110]: W0122 14:17:08.171314 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb332b10_a8e9_47fb_9f51_72611b44de2d.slice/crio-63beb21e30d3458c285dba4cf87ae9583062f05a97b695e76a69dd3359836dc9 WatchSource:0}: Error finding container 63beb21e30d3458c285dba4cf87ae9583062f05a97b695e76a69dd3359836dc9: Status 404 returned error can't find the container with id 63beb21e30d3458c285dba4cf87ae9583062f05a97b695e76a69dd3359836dc9 Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.234006 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.243844 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:17:08 crc kubenswrapper[5110]: [-]has-synced failed: reason 
withheld Jan 22 14:17:08 crc kubenswrapper[5110]: [+]process-running ok Jan 22 14:17:08 crc kubenswrapper[5110]: healthz check failed Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.244001 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.244498 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:08 crc kubenswrapper[5110]: E0122 14:17:08.244755 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:08.744723438 +0000 UTC m=+108.966807807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.245283 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:08 crc kubenswrapper[5110]: E0122 14:17:08.247289 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:08.747272795 +0000 UTC m=+108.969357154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.351529 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:08 crc kubenswrapper[5110]: E0122 14:17:08.351760 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:08.851730984 +0000 UTC m=+109.073815343 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.351925 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:08 crc kubenswrapper[5110]: E0122 14:17:08.352419 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:08.852409591 +0000 UTC m=+109.074493950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.428720 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-l7rdn"] Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.453110 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:08 crc kubenswrapper[5110]: E0122 14:17:08.453489 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:08.95347221 +0000 UTC m=+109.175556569 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.554585 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:08 crc kubenswrapper[5110]: E0122 14:17:08.554959 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:09.05494582 +0000 UTC m=+109.277030179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.566168 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-s46r2" podStartSLOduration=88.566150546 podStartE2EDuration="1m28.566150546s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:08.562345956 +0000 UTC m=+108.784430325" watchObservedRunningTime="2026-01-22 14:17:08.566150546 +0000 UTC m=+108.788234905" Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.608908 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" podStartSLOduration=88.608888965 podStartE2EDuration="1m28.608888965s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:08.607246591 +0000 UTC m=+108.829330960" watchObservedRunningTime="2026-01-22 14:17:08.608888965 +0000 UTC m=+108.830973324" Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.655474 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:08 crc kubenswrapper[5110]: E0122 14:17:08.656082 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:09.156066981 +0000 UTC m=+109.378151340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.675934 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-pfd2d" podStartSLOduration=88.675917175 podStartE2EDuration="1m28.675917175s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:08.648600553 +0000 UTC m=+108.870684912" watchObservedRunningTime="2026-01-22 14:17:08.675917175 +0000 UTC m=+108.898001534" Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.676035 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podStartSLOduration=88.676032178 podStartE2EDuration="1m28.676032178s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-22 14:17:08.675293378 +0000 UTC m=+108.897377737" watchObservedRunningTime="2026-01-22 14:17:08.676032178 +0000 UTC m=+108.898116537" Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.760033 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:08 crc kubenswrapper[5110]: E0122 14:17:08.760366 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:09.260352054 +0000 UTC m=+109.482436413 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.796454 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" podStartSLOduration=88.796433077 podStartE2EDuration="1m28.796433077s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:08.772430293 +0000 UTC m=+108.994514652" watchObservedRunningTime="2026-01-22 
14:17:08.796433077 +0000 UTC m=+109.018517436" Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.799428 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c"] Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.834496 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-w84j9" podStartSLOduration=88.834473602 podStartE2EDuration="1m28.834473602s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:08.81397828 +0000 UTC m=+109.036062639" watchObservedRunningTime="2026-01-22 14:17:08.834473602 +0000 UTC m=+109.056557961" Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.839435 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-swx4b"] Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.856543 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc"] Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.866100 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:08 crc kubenswrapper[5110]: E0122 14:17:08.887159 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 14:17:09.387120352 +0000 UTC m=+109.609204721 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.892741 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg"]
Jan 22 14:17:08 crc kubenswrapper[5110]: W0122 14:17:08.930262 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab36073f_d7d7_4b3f_ae1e_70f0eec2847e.slice/crio-2470033028b4e0863470e59119eda3d7bc51d0b7431a594e1983c3cb6bd6d09d WatchSource:0}: Error finding container 2470033028b4e0863470e59119eda3d7bc51d0b7431a594e1983c3cb6bd6d09d: Status 404 returned error can't find the container with id 2470033028b4e0863470e59119eda3d7bc51d0b7431a594e1983c3cb6bd6d09d
Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.950484 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5"]
Jan 22 14:17:08 crc kubenswrapper[5110]: I0122 14:17:08.990869 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:08 crc kubenswrapper[5110]: E0122 14:17:08.991262 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:09.491247002 +0000 UTC m=+109.713331361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.008923 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-lvg4h"]
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.094001 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:09 crc kubenswrapper[5110]: E0122 14:17:09.094281 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:09.594262282 +0000 UTC m=+109.816346641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.094704 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:09 crc kubenswrapper[5110]: E0122 14:17:09.094975 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:09.59496401 +0000 UTC m=+109.817048369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.151896 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-49nzv"]
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.185994 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-lvg4h" event={"ID":"c5be24a2-d28d-43a7-b91e-9e271103e690","Type":"ContainerStarted","Data":"f38ceef1d09a20504ced8cfee65db423654c416c1ff78fc2732be73496a5fea7"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.195399 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q"]
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.195995 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.196373 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr" event={"ID":"1ec0db01-c734-4115-9b06-f28b8912aad3","Type":"ContainerStarted","Data":"28aeae0364a3f6380493ddd1498c1dd8795e3e413b78b048173638026fd99f73"}
Jan 22 14:17:09 crc kubenswrapper[5110]: E0122 14:17:09.196406 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:09.696387529 +0000 UTC m=+109.918471888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.206579 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg" event={"ID":"a39eb6e9-c23a-4196-89b4-edc51424175a","Type":"ContainerStarted","Data":"92edd276472331b1b9307c02def28a0d0c810ae2c6a1e20fe68efafb5abead09"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.209426 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5" event={"ID":"f08218bc-fe9e-439e-be7e-469c6232350c","Type":"ContainerStarted","Data":"66b6ce81861db7cf93a86960474a7a5a873ece9d079e06d96ffda66c27d71318"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.213202 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-l7rdn" event={"ID":"819926eb-28d7-4ac4-a9ea-ddfd9751b3b6","Type":"ContainerStarted","Data":"8d0b60afe8ca3a178935c6f8aa89020b335dde9303efc730cbc476fbec0cd247"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.214974 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-74rfw"]
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.230933 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-hggst" event={"ID":"bb332b10-a8e9-47fb-9f51-72611b44de2d","Type":"ContainerStarted","Data":"63beb21e30d3458c285dba4cf87ae9583062f05a97b695e76a69dd3359836dc9"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.234531 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 14:17:09 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld
Jan 22 14:17:09 crc kubenswrapper[5110]: [+]process-running ok
Jan 22 14:17:09 crc kubenswrapper[5110]: healthz check failed
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.234578 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.250313 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-z8ls4" event={"ID":"1c6d886d-04ca-4651-b1ca-0bda7bee5c7d","Type":"ContainerStarted","Data":"1f80f0386d1bc0ace96e01fe4d515afd08610a6a0a8b936223a9f5b601137739"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.253970 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" event={"ID":"aa774b1c-ef48-4bd8-a4df-d3b963e547e6","Type":"ContainerStarted","Data":"a2f3db0644662ca5cac96e5e430af076429ab84bc25e86ff668f51fb2272f0de"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.254667 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf"
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.255547 5110 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-pgzmf container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body=
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.255588 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" podUID="aa774b1c-ef48-4bd8-a4df-d3b963e547e6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused"
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.261524 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-swx4b" event={"ID":"f89c668c-680e-4342-9003-c7140b9f5d51","Type":"ContainerStarted","Data":"b1bd2f9267a6ed86055c991765c79afebacf917ee447d3941dd1066c7298f4eb"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.280596 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-f899n" event={"ID":"e867811f-a825-4545-9591-a00087eb4e33","Type":"ContainerStarted","Data":"ae6fd15a98cb1451b316063efd4e0586c93724d8c15c1a407dc8841413dd5c3a"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.284427 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-z8ls4" podStartSLOduration=89.284404683 podStartE2EDuration="1m29.284404683s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:09.275757945 +0000 UTC m=+109.497842324" watchObservedRunningTime="2026-01-22 14:17:09.284404683 +0000 UTC m=+109.506489042"
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.297427 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:09 crc kubenswrapper[5110]: E0122 14:17:09.298175 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:09.798162077 +0000 UTC m=+110.020246436 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.348719 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd"]
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.350425 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-s875b" event={"ID":"1bdf8fc3-f5ff-4721-b79f-539fec1dabd5","Type":"ContainerStarted","Data":"32e0297120e091ed96cefb3c59a0f098345cafd71332b3c9b25fca1c871729ad"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.350671 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-s875b"
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.353064 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" podStartSLOduration=89.353044266 podStartE2EDuration="1m29.353044266s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:09.327973424 +0000 UTC m=+109.550057783" watchObservedRunningTime="2026-01-22 14:17:09.353044266 +0000 UTC m=+109.575128625"
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.360493 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz" event={"ID":"04a3cce4-5dc1-418d-a112-6d9e30fdbc52","Type":"ContainerStarted","Data":"ed142409ae86cd70bec622d0e0f495b9dd898b0a9121d48fc5ad827244c35b99"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.394395 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8jstw"]
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.411183 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:09 crc kubenswrapper[5110]: E0122 14:17:09.412210 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:09.912189918 +0000 UTC m=+110.134274277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.414870 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk" event={"ID":"8280d23b-d6cf-40ad-996e-d148f43bd0dd","Type":"ContainerStarted","Data":"4bcf552f1fc866aa4751a96f1d8bb1c568bb3015e909c05fa21987be053b26a4"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.415524 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk"
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.416533 5110 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-c7wsk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.416585 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk" podUID="8280d23b-d6cf-40ad-996e-d148f43bd0dd" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.444065 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-ntc52" event={"ID":"41b28f5f-47e9-49c2-a0f3-efb26640b87f","Type":"ContainerStarted","Data":"2ac9eaa4e1611bfa28c1450a87c21f68962ed141cd219a26ea4d135a57b862f7"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.470703 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-s875b" podStartSLOduration=89.470686903 podStartE2EDuration="1m29.470686903s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:09.416951484 +0000 UTC m=+109.639035843" watchObservedRunningTime="2026-01-22 14:17:09.470686903 +0000 UTC m=+109.692771262"
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.496205 5110 generic.go:358] "Generic (PLEG): container finished" podID="7af25bf8-c994-4704-821b-ee6df60d64f1" containerID="93ea1e9b00ad1b62bfd9ca5c38e794c6a584891864836da87ba6c867d9fbf3bc" exitCode=0
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.496292 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" event={"ID":"7af25bf8-c994-4704-821b-ee6df60d64f1","Type":"ContainerDied","Data":"93ea1e9b00ad1b62bfd9ca5c38e794c6a584891864836da87ba6c867d9fbf3bc"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.509235 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc" event={"ID":"ab36073f-d7d7-4b3f-ae1e-70f0eec2847e","Type":"ContainerStarted","Data":"2470033028b4e0863470e59119eda3d7bc51d0b7431a594e1983c3cb6bd6d09d"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.511111 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2m627" event={"ID":"270f9528-4848-4264-8fdc-93e2ae195ec4","Type":"ContainerStarted","Data":"98fa8a5feddd6da13d45b5e5b05ba1740670ddc9bbfad112a462fe1ed33360b4"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.511910 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg" event={"ID":"3170f172-b46b-4670-94e6-a340749c97e4","Type":"ContainerStarted","Data":"0df496227b64072bacea3433fdc9001baaa6e3adf538e578f82ae6c6475a04fd"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.514376 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c" event={"ID":"5de37d4a-279a-45d0-ba01-0749e4b765a0","Type":"ContainerStarted","Data":"437e5cd8034e4cc033c254a9b5af69f702909940a763e755cd070086c101ffd8"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.515557 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4zpj9" event={"ID":"65eadafe-e3fc-4413-9bc5-8bab4872f395","Type":"ContainerStarted","Data":"4e0172655aca5349dc48eeed42c004d737aeae1f692fa65e63797c05fc90be7a"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.519421 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:09 crc kubenswrapper[5110]: E0122 14:17:09.519912 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:10.019899172 +0000 UTC m=+110.241983531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.535466 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t" event={"ID":"f810045f-32aa-488a-a2d7-0a20f8c88429","Type":"ContainerStarted","Data":"c5daa08eec94ad8dd40db6b47e970e58062499133f39eeb6ea26fd45beea20e2"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.535786 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t" event={"ID":"f810045f-32aa-488a-a2d7-0a20f8c88429","Type":"ContainerStarted","Data":"7cd53dd83d5cbdcc0b4399428b9fe2fc9c09e78422d7869682f459f78a3ac424"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.536179 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk" podStartSLOduration=89.536163042 podStartE2EDuration="1m29.536163042s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:09.471284699 +0000 UTC m=+109.693369068" watchObservedRunningTime="2026-01-22 14:17:09.536163042 +0000 UTC m=+109.758247401"
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.544373 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw" event={"ID":"ac1adcf2-2577-4d5c-86d8-7c21ccc049fb","Type":"ContainerStarted","Data":"d4deff7e552396e212bfd9f727371856062d50dfdf29a43de7684798b809e622"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.544402 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw" event={"ID":"ac1adcf2-2577-4d5c-86d8-7c21ccc049fb","Type":"ContainerStarted","Data":"bd3d409e138b52a3f788c5a76c5a72622b3aacfe90fda8b59d7ab156547ca9a5"}
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.557578 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl"
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.559503 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl"
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.571093 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-j9rvg" podStartSLOduration=89.571080274 podStartE2EDuration="1m29.571080274s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:09.570405246 +0000 UTC m=+109.792489605" watchObservedRunningTime="2026-01-22 14:17:09.571080274 +0000 UTC m=+109.793164633"
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.610048 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-pk2pv" podStartSLOduration=89.610033673 podStartE2EDuration="1m29.610033673s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:09.609047287 +0000 UTC m=+109.831131656" watchObservedRunningTime="2026-01-22 14:17:09.610033673 +0000 UTC m=+109.832118032"
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.623108 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:09 crc kubenswrapper[5110]: E0122 14:17:09.629962 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:10.129935918 +0000 UTC m=+110.352020317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.644897 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w87x4" podStartSLOduration=89.644883043 podStartE2EDuration="1m29.644883043s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:09.644204545 +0000 UTC m=+109.866288904" watchObservedRunningTime="2026-01-22 14:17:09.644883043 +0000 UTC m=+109.866967402"
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.730525 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:09 crc kubenswrapper[5110]: E0122 14:17:09.730889 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:10.230876714 +0000 UTC m=+110.452961073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.730899 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-2vcrx" podStartSLOduration=89.730883904 podStartE2EDuration="1m29.730883904s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:09.71254531 +0000 UTC m=+109.934629689" watchObservedRunningTime="2026-01-22 14:17:09.730883904 +0000 UTC m=+109.952968263"
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.832048 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:09 crc kubenswrapper[5110]: E0122 14:17:09.832549 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:10.332501267 +0000 UTC m=+110.554585636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.832920 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:09 crc kubenswrapper[5110]: E0122 14:17:09.833508 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:10.333498884 +0000 UTC m=+110.555583243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.935408 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:09 crc kubenswrapper[5110]: E0122 14:17:09.936420 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:10.436398951 +0000 UTC m=+110.658483310 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:09 crc kubenswrapper[5110]: I0122 14:17:09.936469 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:09 crc kubenswrapper[5110]: E0122 14:17:09.936813 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:10.436778361 +0000 UTC m=+110.658862720 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.037099 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:10 crc kubenswrapper[5110]: E0122 14:17:10.037590 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:10.537568223 +0000 UTC m=+110.759652582 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.138672 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:10 crc kubenswrapper[5110]: E0122 14:17:10.139112 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:10.639096774 +0000 UTC m=+110.861181133 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.239528 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:10 crc kubenswrapper[5110]: E0122 14:17:10.239807 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:10.739750812 +0000 UTC m=+110.961835171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.240202 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.240601 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:17:10 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 22 14:17:10 crc kubenswrapper[5110]: [+]process-running ok Jan 22 14:17:10 crc kubenswrapper[5110]: healthz check failed Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.240664 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:17:10 crc kubenswrapper[5110]: E0122 14:17:10.240692 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 14:17:10.740599465 +0000 UTC m=+110.962683824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.342122 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:10 crc kubenswrapper[5110]: E0122 14:17:10.342860 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:10.842843244 +0000 UTC m=+111.064927603 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.351035 5110 patch_prober.go:28] interesting pod/console-operator-67c89758df-s875b container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.351149 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-s875b" podUID="1bdf8fc3-f5ff-4721-b79f-539fec1dabd5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.497277 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:10 crc kubenswrapper[5110]: E0122 14:17:10.497553 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 14:17:10.99754145 +0000 UTC m=+111.219625809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.600285 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:10 crc kubenswrapper[5110]: E0122 14:17:10.601051 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:11.101027763 +0000 UTC m=+111.323112122 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.704578 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:10 crc kubenswrapper[5110]: E0122 14:17:10.705094 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:11.20507468 +0000 UTC m=+111.427159039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.708026 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4zpj9" event={"ID":"65eadafe-e3fc-4413-9bc5-8bab4872f395","Type":"ContainerStarted","Data":"31aab8bc07191067a5b962d06f423602a3aa3108ebfb1cb6d6fc6a4a7ab77933"}
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.761560 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd" event={"ID":"eaede8c0-4327-4e2c-a850-8101195db984","Type":"ContainerStarted","Data":"9778b0f0725f2b92111fcc096a6c97f26f784cfe9516deb1d984145b7b85602a"}
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.792991 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2m627"
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.795731 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8jstw" event={"ID":"a1a05965-c2b8-41e0-89d2-973217252f27","Type":"ContainerStarted","Data":"b92a0bda82ec7bd018b219ed8cf324c80eb56420e0af4a72337a707d9f6cf5ea"}
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.797979 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-flfnb" event={"ID":"abca4105-161b-4d77-9d70-35b13bbcabfd","Type":"ContainerStarted","Data":"38bd86a35463f2aaf9270c78c3311881ff04537fd3bacc6f0578c00c12d95966"}
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.798723 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-flfnb"
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.806006 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:10 crc kubenswrapper[5110]: E0122 14:17:10.806841 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:11.306816507 +0000 UTC m=+111.528900866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.810702 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n" event={"ID":"ff17a5cc-b922-4d50-8639-eb19d9c97069","Type":"ContainerStarted","Data":"204b4cd79afa5d5607b872c600ac468a56a0d40efb4d88cf759966a02027fa64"}
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.814856 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-lvg4h" event={"ID":"c5be24a2-d28d-43a7-b91e-9e271103e690","Type":"ContainerStarted","Data":"0e8ccea52caa9ba0537b36e2b5099a5d02cb1a22e5acf06537f01dd93e014463"}
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.843813 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-l7rdn" event={"ID":"819926eb-28d7-4ac4-a9ea-ddfd9751b3b6","Type":"ContainerStarted","Data":"967c02cb7e1f6dc4e7d525af78dbc21fc33415f075e19a6abfff8ced21756e3c"}
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.855936 5110 ???:1] "http: TLS handshake error from 192.168.126.11:50834: no serving certificate available for the kubelet"
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.859533 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-hggst" event={"ID":"bb332b10-a8e9-47fb-9f51-72611b44de2d","Type":"ContainerStarted","Data":"97d0a2f7737661f4f88966e267c680de3046ca487b6b01110a006bc70f355613"}
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.876435 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-rfm46" event={"ID":"0202df03-1e3b-4cb3-a279-a6376a61ac6a","Type":"ContainerStarted","Data":"aa2737e95272c967312d40fb4a07e41a5dee9f3cc016d1eae8c0e6a73bbce5ab"}
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.880200 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" event={"ID":"3adc63ca-ac54-461a-9a91-10ba0b85fa2b","Type":"ContainerStarted","Data":"c6ce80a45875b11a4779155da6a4d4afd657356c97fb5ee2e766703d40ba6928"}
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.888219 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr" event={"ID":"1ec0db01-c734-4115-9b06-f28b8912aad3","Type":"ContainerStarted","Data":"dc6951be1f3d98e6678a4f79b614d95cc558d8fb1096b104d3d23e17fcf5acdf"}
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.889349 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr"
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.907862 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:10 crc kubenswrapper[5110]: E0122 14:17:10.910275 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:11.410258579 +0000 UTC m=+111.632342938 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.923819 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc" event={"ID":"ab36073f-d7d7-4b3f-ae1e-70f0eec2847e","Type":"ContainerStarted","Data":"baf32a176fdba095d989786f5b59a00294b547c4f57e4bad3c4698651b0241d7"}
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.924267 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc"
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.925634 5110 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-rmjwc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body=
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.925670 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc" podUID="ab36073f-d7d7-4b3f-ae1e-70f0eec2847e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused"
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.989112 5110 ???:1] "http: TLS handshake error from 192.168.126.11:50838: no serving certificate available for the kubelet"
Jan 22 14:17:10 crc kubenswrapper[5110]: I0122 14:17:10.991058 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-sxktl" event={"ID":"e7ad86e7-ae45-45ea-b4f5-ea725569075a","Type":"ContainerStarted","Data":"82a6ad7073e14ff8288321b4e22f21504c3f2641db1e62ee9997b59e739250f0"}
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.009292 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:11 crc kubenswrapper[5110]: E0122 14:17:11.010420 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:11.510405543 +0000 UTC m=+111.732489902 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.013930 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" event={"ID":"ccc3783f-31af-4d45-bf5f-1403105ce449","Type":"ContainerStarted","Data":"d080dc30ef097be73c5f4fe8c0c6c54f8367956914464dd4e339c5e388286348"}
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.059850 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5" event={"ID":"117fa2cb-296d-4b02-bd35-be72c9070148","Type":"ContainerStarted","Data":"9f9835da1186e2a8e16054ecda822622de5a618534f2674649a7b477b4cc379a"}
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.106085 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-swx4b" event={"ID":"f89c668c-680e-4342-9003-c7140b9f5d51","Type":"ContainerStarted","Data":"b13d5293ca9d7568783f687724af4f35638a479b10306319d973d39262621e5c"}
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.107162 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-swx4b"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.108641 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-swx4b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 
14:17:11.108681 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-swx4b" podUID="f89c668c-680e-4342-9003-c7140b9f5d51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.110644 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:11 crc kubenswrapper[5110]: E0122 14:17:11.111332 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:11.611319088 +0000 UTC m=+111.833403447 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.116741 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-sxktl" podStartSLOduration=91.116728021 podStartE2EDuration="1m31.116728021s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:11.116112935 +0000 UTC m=+111.338197304" watchObservedRunningTime="2026-01-22 14:17:11.116728021 +0000 UTC m=+111.338812380"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.116990 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-4zpj9" podStartSLOduration=8.116982548 podStartE2EDuration="8.116982548s" podCreationTimestamp="2026-01-22 14:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:11.070845209 +0000 UTC m=+111.292929568" watchObservedRunningTime="2026-01-22 14:17:11.116982548 +0000 UTC m=+111.339066907"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.120218 5110 ???:1] "http: TLS handshake error from 192.168.126.11:50854: no serving certificate available for the kubelet"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.126933 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz" event={"ID":"04a3cce4-5dc1-418d-a112-6d9e30fdbc52","Type":"ContainerStarted","Data":"1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f"}
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.127336 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.143376 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-49nzv" event={"ID":"d092a4ac-8c7f-4b9b-b62f-503fc9438f57","Type":"ContainerStarted","Data":"e8f699c2645ef236170b3454cd8036be14ea8ec8185f41d0b7f5846b044c4800"}
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.163321 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-fc65d" event={"ID":"072ff625-701b-439b-841f-07ca74f91eee","Type":"ContainerStarted","Data":"7eac4f943ba7cc9b38f03a5708cf8c60f059ab4b56295fc98378d7090ef80101"}
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.191155 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-c7wsk"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.212470 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.214690 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz"
Jan 22 14:17:11 crc kubenswrapper[5110]: E0122 14:17:11.215087 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:11.715072008 +0000 UTC m=+111.937156367 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.230876 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-flfnb" podStartSLOduration=8.230859645 podStartE2EDuration="8.230859645s" podCreationTimestamp="2026-01-22 14:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:11.223275455 +0000 UTC m=+111.445359814" watchObservedRunningTime="2026-01-22 14:17:11.230859645 +0000 UTC m=+111.452943994"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.236386 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 14:17:11 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld
Jan 22 14:17:11 crc kubenswrapper[5110]: [+]process-running ok
Jan 22 14:17:11 crc kubenswrapper[5110]: healthz check failed
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.236448 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.270773 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-ntc52" podStartSLOduration=91.270739378 podStartE2EDuration="1m31.270739378s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:11.267423251 +0000 UTC m=+111.489507610" watchObservedRunningTime="2026-01-22 14:17:11.270739378 +0000 UTC m=+111.492823737"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.298929 5110 ???:1] "http: TLS handshake error from 192.168.126.11:50864: no serving certificate available for the kubelet"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.306769 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc" podStartSLOduration=91.306744119 podStartE2EDuration="1m31.306744119s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:11.301446129 +0000 UTC m=+111.523530488" watchObservedRunningTime="2026-01-22 14:17:11.306744119 +0000 UTC m=+111.528828478"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.316563 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:11 crc kubenswrapper[5110]: E0122 14:17:11.332587 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:11.83255395 +0000 UTC m=+112.054638309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.381565 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-zbf7n" podStartSLOduration=91.381542464 podStartE2EDuration="1m31.381542464s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:11.341963259 +0000 UTC m=+111.564047618" watchObservedRunningTime="2026-01-22 14:17:11.381542464 +0000 UTC m=+111.603626813"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.384836 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2m627" podStartSLOduration=91.384821611 podStartE2EDuration="1m31.384821611s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:11.369506896 +0000 UTC m=+111.591591245" watchObservedRunningTime="2026-01-22 14:17:11.384821611 +0000 UTC m=+111.606905970"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.407045 5110 ???:1] "http: TLS handshake error from 192.168.126.11:50876: no serving certificate available for the kubelet"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.425472 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:11 crc kubenswrapper[5110]: E0122 14:17:11.426106 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:11.92608081 +0000 UTC m=+112.148165169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.434492 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-rfm46" podStartSLOduration=91.434474732 podStartE2EDuration="1m31.434474732s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:11.399830497 +0000 UTC m=+111.621914866" watchObservedRunningTime="2026-01-22 14:17:11.434474732 +0000 UTC m=+111.656559091"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.437517 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr" podStartSLOduration=91.437493502 podStartE2EDuration="1m31.437493502s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:11.433858116 +0000 UTC m=+111.655942485" watchObservedRunningTime="2026-01-22 14:17:11.437493502 +0000 UTC m=+111.659577861"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.507924 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-lvg4h" podStartSLOduration=91.507908031 podStartE2EDuration="1m31.507908031s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:11.466086477 +0000 UTC m=+111.688170846" watchObservedRunningTime="2026-01-22 14:17:11.507908031 +0000 UTC m=+111.729992390"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.508333 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c" podStartSLOduration=91.508329472 podStartE2EDuration="1m31.508329472s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:11.505168099 +0000 UTC m=+111.727252468" watchObservedRunningTime="2026-01-22 14:17:11.508329472 +0000 UTC m=+111.730413831"
Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.529502 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:11 crc kubenswrapper[5110]: E0122 14:17:11.530630 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:12.03059889 +0000 UTC m=+112.252683249 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.538180 5110 ???:1] "http: TLS handshake error from 192.168.126.11:50886: no serving certificate available for the kubelet" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.593262 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz" podStartSLOduration=8.593242715 podStartE2EDuration="8.593242715s" podCreationTimestamp="2026-01-22 14:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:11.586854776 +0000 UTC m=+111.808939135" watchObservedRunningTime="2026-01-22 14:17:11.593242715 +0000 UTC m=+111.815327074" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.593971 5110 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-ingress-canary/ingress-canary-l7rdn" podStartSLOduration=8.593965864 podStartE2EDuration="8.593965864s" podCreationTimestamp="2026-01-22 14:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:11.537067481 +0000 UTC m=+111.759151870" watchObservedRunningTime="2026-01-22 14:17:11.593965864 +0000 UTC m=+111.816050223" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.625398 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-fc65d" podStartSLOduration=91.62537592300001 podStartE2EDuration="1m31.625375923s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:11.625175758 +0000 UTC m=+111.847260127" watchObservedRunningTime="2026-01-22 14:17:11.625375923 +0000 UTC m=+111.847460282" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.627906 5110 ???:1] "http: TLS handshake error from 192.168.126.11:50902: no serving certificate available for the kubelet" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.630180 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:11 crc kubenswrapper[5110]: E0122 14:17:11.630557 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 14:17:12.1305401 +0000 UTC m=+112.352624459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.655605 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-584j5" podStartSLOduration=91.655582811 podStartE2EDuration="1m31.655582811s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:11.655474968 +0000 UTC m=+111.877559337" watchObservedRunningTime="2026-01-22 14:17:11.655582811 +0000 UTC m=+111.877667170" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.678665 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.719004 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-swx4b" podStartSLOduration=91.718982674 podStartE2EDuration="1m31.718982674s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:11.716255792 +0000 UTC m=+111.938340181" watchObservedRunningTime="2026-01-22 14:17:11.718982674 +0000 UTC m=+111.941067033" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.723870 5110 kubelet.go:2544] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.724029 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.730528 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.730950 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.731904 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:11 crc kubenswrapper[5110]: E0122 14:17:11.732181 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:12.232169412 +0000 UTC m=+112.454253771 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.759614 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6z98k"] Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.786709 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6z98k"] Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.786867 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6z98k" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.790246 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.792967 5110 ???:1] "http: TLS handshake error from 192.168.126.11:50908: no serving certificate available for the kubelet" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.834167 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.834317 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/5148b9f0-04d6-4dfe-9387-293d146f83c7-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"5148b9f0-04d6-4dfe-9387-293d146f83c7\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.834365 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5148b9f0-04d6-4dfe-9387-293d146f83c7-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"5148b9f0-04d6-4dfe-9387-293d146f83c7\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:17:11 crc kubenswrapper[5110]: E0122 14:17:11.835099 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:12.33507294 +0000 UTC m=+112.557157289 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.890271 5110 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-9qczr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": context deadline exceeded" start-of-body= Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.890526 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr" podUID="1ec0db01-c734-4115-9b06-f28b8912aad3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": context deadline exceeded" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.935860 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7-utilities\") pod \"community-operators-6z98k\" (UID: \"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7\") " pod="openshift-marketplace/community-operators-6z98k" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.936460 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " 
pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.936524 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5148b9f0-04d6-4dfe-9387-293d146f83c7-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"5148b9f0-04d6-4dfe-9387-293d146f83c7\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.936556 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6t6k\" (UniqueName: \"kubernetes.io/projected/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7-kube-api-access-w6t6k\") pod \"community-operators-6z98k\" (UID: \"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7\") " pod="openshift-marketplace/community-operators-6z98k" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.936588 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7-catalog-content\") pod \"community-operators-6z98k\" (UID: \"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7\") " pod="openshift-marketplace/community-operators-6z98k" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.936650 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5148b9f0-04d6-4dfe-9387-293d146f83c7-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"5148b9f0-04d6-4dfe-9387-293d146f83c7\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.936822 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5148b9f0-04d6-4dfe-9387-293d146f83c7-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: 
\"5148b9f0-04d6-4dfe-9387-293d146f83c7\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:17:11 crc kubenswrapper[5110]: E0122 14:17:11.937232 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:12.437213917 +0000 UTC m=+112.659298276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.967228 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hxzv2"] Jan 22 14:17:11 crc kubenswrapper[5110]: I0122 14:17:11.982425 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5148b9f0-04d6-4dfe-9387-293d146f83c7-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"5148b9f0-04d6-4dfe-9387-293d146f83c7\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.037912 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.038104 5110 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"kube-api-access-w6t6k\" (UniqueName: \"kubernetes.io/projected/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7-kube-api-access-w6t6k\") pod \"community-operators-6z98k\" (UID: \"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7\") " pod="openshift-marketplace/community-operators-6z98k" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.038132 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7-catalog-content\") pod \"community-operators-6z98k\" (UID: \"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7\") " pod="openshift-marketplace/community-operators-6z98k" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.038255 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7-utilities\") pod \"community-operators-6z98k\" (UID: \"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7\") " pod="openshift-marketplace/community-operators-6z98k" Jan 22 14:17:12 crc kubenswrapper[5110]: E0122 14:17:12.040327 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:12.540293459 +0000 UTC m=+112.762377828 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.040493 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7-utilities\") pod \"community-operators-6z98k\" (UID: \"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7\") " pod="openshift-marketplace/community-operators-6z98k" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.041397 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7-catalog-content\") pod \"community-operators-6z98k\" (UID: \"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7\") " pod="openshift-marketplace/community-operators-6z98k" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.076349 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hxzv2"] Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.076400 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rjmdz"] Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.076602 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hxzv2" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.077588 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6t6k\" (UniqueName: \"kubernetes.io/projected/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7-kube-api-access-w6t6k\") pod \"community-operators-6z98k\" (UID: \"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7\") " pod="openshift-marketplace/community-operators-6z98k" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.077816 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.081154 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.098750 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.124449 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6z98k" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.152524 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:12 crc kubenswrapper[5110]: E0122 14:17:12.152875 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 14:17:12.652858662 +0000 UTC m=+112.874943011 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.177735 5110 patch_prober.go:28] interesting pod/console-operator-67c89758df-s875b container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": context deadline exceeded" start-of-body= Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.177817 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-s875b" podUID="1bdf8fc3-f5ff-4721-b79f-539fec1dabd5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": context deadline exceeded" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.182563 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v85l6"] Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.213806 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v85l6"] Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.214025 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v85l6" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.256198 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.256885 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d099c037-9022-46df-8a66-e2856ee9dbd9-catalog-content\") pod \"certified-operators-hxzv2\" (UID: \"d099c037-9022-46df-8a66-e2856ee9dbd9\") " pod="openshift-marketplace/certified-operators-hxzv2" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.257050 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d099c037-9022-46df-8a66-e2856ee9dbd9-utilities\") pod \"certified-operators-hxzv2\" (UID: \"d099c037-9022-46df-8a66-e2856ee9dbd9\") " pod="openshift-marketplace/certified-operators-hxzv2" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.257086 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4e8bf1-1091-4c8c-b724-9af50f626c94-utilities\") pod \"community-operators-v85l6\" (UID: \"ca4e8bf1-1091-4c8c-b724-9af50f626c94\") " pod="openshift-marketplace/community-operators-v85l6" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.257142 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2nwh\" (UniqueName: \"kubernetes.io/projected/d099c037-9022-46df-8a66-e2856ee9dbd9-kube-api-access-k2nwh\") pod 
\"certified-operators-hxzv2\" (UID: \"d099c037-9022-46df-8a66-e2856ee9dbd9\") " pod="openshift-marketplace/certified-operators-hxzv2" Jan 22 14:17:12 crc kubenswrapper[5110]: E0122 14:17:12.257259 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:12.757211137 +0000 UTC m=+112.979295496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.257318 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z228\" (UniqueName: \"kubernetes.io/projected/ca4e8bf1-1091-4c8c-b724-9af50f626c94-kube-api-access-7z228\") pod \"community-operators-v85l6\" (UID: \"ca4e8bf1-1091-4c8c-b724-9af50f626c94\") " pod="openshift-marketplace/community-operators-v85l6" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.257443 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4e8bf1-1091-4c8c-b724-9af50f626c94-catalog-content\") pod \"community-operators-v85l6\" (UID: \"ca4e8bf1-1091-4c8c-b724-9af50f626c94\") " pod="openshift-marketplace/community-operators-v85l6" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.259234 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:17:12 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 22 14:17:12 crc kubenswrapper[5110]: [+]process-running ok Jan 22 14:17:12 crc kubenswrapper[5110]: healthz check failed Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.259299 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.309793 5110 generic.go:358] "Generic (PLEG): container finished" podID="5de37d4a-279a-45d0-ba01-0749e4b765a0" containerID="bf437b9f12c8a6edceb5343c3b1c37156b9548bd9767501181b23a4e83ae6c62" exitCode=0 Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.323459 5110 generic.go:358] "Generic (PLEG): container finished" podID="ccc3783f-31af-4d45-bf5f-1403105ce449" containerID="4f896430a1d95c325f36c7bc81010dea188d29bb1afb7a60b06509091ff7d228" exitCode=0 Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.336333 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" event={"ID":"7af25bf8-c994-4704-821b-ee6df60d64f1","Type":"ContainerStarted","Data":"7944da8760bad7475864afaa8f4f50b472e4722a845fac2b3f931f44cd672dcb"} Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.336406 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c" event={"ID":"5de37d4a-279a-45d0-ba01-0749e4b765a0","Type":"ContainerDied","Data":"bf437b9f12c8a6edceb5343c3b1c37156b9548bd9767501181b23a4e83ae6c62"} Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.336423 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg" event={"ID":"a39eb6e9-c23a-4196-89b4-edc51424175a","Type":"ContainerStarted","Data":"45be3185dff7c9322f7e9d1225e67de5a3b74b86362a49dc8f0fba4dedb392fd"} Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.336433 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" event={"ID":"ccc3783f-31af-4d45-bf5f-1403105ce449","Type":"ContainerDied","Data":"4f896430a1d95c325f36c7bc81010dea188d29bb1afb7a60b06509091ff7d228"} Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.359277 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rg44w"] Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.359438 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpbgg" podStartSLOduration=92.359418777 podStartE2EDuration="1m32.359418777s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:12.353025748 +0000 UTC m=+112.575110117" watchObservedRunningTime="2026-01-22 14:17:12.359418777 +0000 UTC m=+112.581503136" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.364745 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4e8bf1-1091-4c8c-b724-9af50f626c94-catalog-content\") pod \"community-operators-v85l6\" (UID: \"ca4e8bf1-1091-4c8c-b724-9af50f626c94\") " pod="openshift-marketplace/community-operators-v85l6" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.364885 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d099c037-9022-46df-8a66-e2856ee9dbd9-catalog-content\") pod \"certified-operators-hxzv2\" (UID: \"d099c037-9022-46df-8a66-e2856ee9dbd9\") " pod="openshift-marketplace/certified-operators-hxzv2" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.364990 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.365070 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d099c037-9022-46df-8a66-e2856ee9dbd9-utilities\") pod \"certified-operators-hxzv2\" (UID: \"d099c037-9022-46df-8a66-e2856ee9dbd9\") " pod="openshift-marketplace/certified-operators-hxzv2" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.365112 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4e8bf1-1091-4c8c-b724-9af50f626c94-utilities\") pod \"community-operators-v85l6\" (UID: \"ca4e8bf1-1091-4c8c-b724-9af50f626c94\") " pod="openshift-marketplace/community-operators-v85l6" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.365222 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k2nwh\" (UniqueName: \"kubernetes.io/projected/d099c037-9022-46df-8a66-e2856ee9dbd9-kube-api-access-k2nwh\") pod \"certified-operators-hxzv2\" (UID: \"d099c037-9022-46df-8a66-e2856ee9dbd9\") " pod="openshift-marketplace/certified-operators-hxzv2" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.365261 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7z228\" (UniqueName: \"kubernetes.io/projected/ca4e8bf1-1091-4c8c-b724-9af50f626c94-kube-api-access-7z228\") pod \"community-operators-v85l6\" (UID: \"ca4e8bf1-1091-4c8c-b724-9af50f626c94\") " pod="openshift-marketplace/community-operators-v85l6" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.367165 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4e8bf1-1091-4c8c-b724-9af50f626c94-catalog-content\") pod \"community-operators-v85l6\" (UID: \"ca4e8bf1-1091-4c8c-b724-9af50f626c94\") " pod="openshift-marketplace/community-operators-v85l6" Jan 22 14:17:12 crc kubenswrapper[5110]: E0122 14:17:12.367457 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:12.867435788 +0000 UTC m=+113.089520137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.368068 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d099c037-9022-46df-8a66-e2856ee9dbd9-utilities\") pod \"certified-operators-hxzv2\" (UID: \"d099c037-9022-46df-8a66-e2856ee9dbd9\") " pod="openshift-marketplace/certified-operators-hxzv2" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.368297 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4e8bf1-1091-4c8c-b724-9af50f626c94-utilities\") pod \"community-operators-v85l6\" (UID: \"ca4e8bf1-1091-4c8c-b724-9af50f626c94\") " pod="openshift-marketplace/community-operators-v85l6" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.368864 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d099c037-9022-46df-8a66-e2856ee9dbd9-catalog-content\") pod \"certified-operators-hxzv2\" (UID: \"d099c037-9022-46df-8a66-e2856ee9dbd9\") " pod="openshift-marketplace/certified-operators-hxzv2" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.414040 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-49nzv" event={"ID":"d092a4ac-8c7f-4b9b-b62f-503fc9438f57","Type":"ContainerStarted","Data":"98812f69bd0cb2ab1827d1e747cea7daba8340c27f1d188fe28be19b2bdde456"} Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 
14:17:12.414212 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-49nzv" event={"ID":"d092a4ac-8c7f-4b9b-b62f-503fc9438f57","Type":"ContainerStarted","Data":"11c4ef667a29a57ad651f556d18f78bef91348462920a3738198e4f0d9dd442c"} Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.414243 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rg44w"] Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.414614 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rg44w" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.422206 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z228\" (UniqueName: \"kubernetes.io/projected/ca4e8bf1-1091-4c8c-b724-9af50f626c94-kube-api-access-7z228\") pod \"community-operators-v85l6\" (UID: \"ca4e8bf1-1091-4c8c-b724-9af50f626c94\") " pod="openshift-marketplace/community-operators-v85l6" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.425122 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t" event={"ID":"f810045f-32aa-488a-a2d7-0a20f8c88429","Type":"ContainerStarted","Data":"fec2c54e7103dfcc7bd2d3ff4476d3ad53cb94b42083bd06b442e47544e50984"} Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.438062 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd" event={"ID":"eaede8c0-4327-4e2c-a850-8101195db984","Type":"ContainerStarted","Data":"b1f535bc64cb06dc3445456c7b585071d6609801930024c57eb43e8f90815ef4"} Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.466334 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.466706 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-49nzv" podStartSLOduration=92.466679939 podStartE2EDuration="1m32.466679939s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:12.443589559 +0000 UTC m=+112.665673918" watchObservedRunningTime="2026-01-22 14:17:12.466679939 +0000 UTC m=+112.688764298" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.466990 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9kbb\" (UniqueName: \"kubernetes.io/projected/4baaed1f-91ae-4249-96ed-11c5a93986ab-kube-api-access-p9kbb\") pod \"certified-operators-rg44w\" (UID: \"4baaed1f-91ae-4249-96ed-11c5a93986ab\") " pod="openshift-marketplace/certified-operators-rg44w" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.467218 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4baaed1f-91ae-4249-96ed-11c5a93986ab-utilities\") pod \"certified-operators-rg44w\" (UID: \"4baaed1f-91ae-4249-96ed-11c5a93986ab\") " pod="openshift-marketplace/certified-operators-rg44w" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.467242 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4baaed1f-91ae-4249-96ed-11c5a93986ab-catalog-content\") pod \"certified-operators-rg44w\" (UID: \"4baaed1f-91ae-4249-96ed-11c5a93986ab\") " 
pod="openshift-marketplace/certified-operators-rg44w" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.467878 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2m627" event={"ID":"270f9528-4848-4264-8fdc-93e2ae195ec4","Type":"ContainerStarted","Data":"26ae9e6ccccd9f12f2b78a3f02337db82dee8251d075bb4641297ad99a0550e5"} Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.468523 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2nwh\" (UniqueName: \"kubernetes.io/projected/d099c037-9022-46df-8a66-e2856ee9dbd9-kube-api-access-k2nwh\") pod \"certified-operators-hxzv2\" (UID: \"d099c037-9022-46df-8a66-e2856ee9dbd9\") " pod="openshift-marketplace/certified-operators-hxzv2" Jan 22 14:17:12 crc kubenswrapper[5110]: E0122 14:17:12.467353 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:12.967334706 +0000 UTC m=+113.189419065 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.486044 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw" event={"ID":"ac1adcf2-2577-4d5c-86d8-7c21ccc049fb","Type":"ContainerStarted","Data":"a5858429f45d1df216d4e2b4ddb4f4038cd56c7130a7459f995d2bc006a33974"} Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.503779 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5" event={"ID":"f08218bc-fe9e-439e-be7e-469c6232350c","Type":"ContainerStarted","Data":"0da49982f46ff969d0a8186a16088881f30403855f71ef6f11a175fe9d9cb297"} Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.506713 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-hggst" event={"ID":"bb332b10-a8e9-47fb-9f51-72611b44de2d","Type":"ContainerStarted","Data":"ba7a6caffa6c8cb92390f2cf893806b1019ddaa7834c4bbfc936d2a6b102c2f8"} Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.512390 5110 ???:1] "http: TLS handshake error from 192.168.126.11:50924: no serving certificate available for the kubelet" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.519392 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-7zqvd" podStartSLOduration=92.51937503 podStartE2EDuration="1m32.51937503s" podCreationTimestamp="2026-01-22 14:15:40 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:12.518390474 +0000 UTC m=+112.740474833" watchObservedRunningTime="2026-01-22 14:17:12.51937503 +0000 UTC m=+112.741459389" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.528056 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-vwl9t" podStartSLOduration=92.528033489 podStartE2EDuration="1m32.528033489s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:12.495221353 +0000 UTC m=+112.717305712" watchObservedRunningTime="2026-01-22 14:17:12.528033489 +0000 UTC m=+112.750117838" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.530589 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-f899n" event={"ID":"e867811f-a825-4545-9591-a00087eb4e33","Type":"ContainerStarted","Data":"411a5296d4a27a64e080ab3c65b5c7eabd25aa3c5576b4468613bbdd217fb647"} Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.544654 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-cnmk5" podStartSLOduration=92.544636698 podStartE2EDuration="1m32.544636698s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:12.539123482 +0000 UTC m=+112.761207851" watchObservedRunningTime="2026-01-22 14:17:12.544636698 +0000 UTC m=+112.766721047" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.570881 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" event={"ID":"3adc63ca-ac54-461a-9a91-10ba0b85fa2b","Type":"ContainerStarted","Data":"9531453046d5270ca61029f306a797316355364b48f372a98cccf355e8005f9e"} Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.571863 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hxzv2" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.574323 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.574388 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p9kbb\" (UniqueName: \"kubernetes.io/projected/4baaed1f-91ae-4249-96ed-11c5a93986ab-kube-api-access-p9kbb\") pod \"certified-operators-rg44w\" (UID: \"4baaed1f-91ae-4249-96ed-11c5a93986ab\") " pod="openshift-marketplace/certified-operators-rg44w" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.574409 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4baaed1f-91ae-4249-96ed-11c5a93986ab-utilities\") pod \"certified-operators-rg44w\" (UID: \"4baaed1f-91ae-4249-96ed-11c5a93986ab\") " pod="openshift-marketplace/certified-operators-rg44w" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.575449 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" Jan 22 14:17:12 crc kubenswrapper[5110]: E0122 14:17:12.586951 5110 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:13.086909134 +0000 UTC m=+113.308993493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.590448 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4baaed1f-91ae-4249-96ed-11c5a93986ab-catalog-content\") pod \"certified-operators-rg44w\" (UID: \"4baaed1f-91ae-4249-96ed-11c5a93986ab\") " pod="openshift-marketplace/certified-operators-rg44w" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.591048 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4baaed1f-91ae-4249-96ed-11c5a93986ab-utilities\") pod \"certified-operators-rg44w\" (UID: \"4baaed1f-91ae-4249-96ed-11c5a93986ab\") " pod="openshift-marketplace/certified-operators-rg44w" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.592779 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4baaed1f-91ae-4249-96ed-11c5a93986ab-catalog-content\") pod \"certified-operators-rg44w\" (UID: \"4baaed1f-91ae-4249-96ed-11c5a93986ab\") " pod="openshift-marketplace/certified-operators-rg44w" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.598224 5110 patch_prober.go:28] interesting 
pod/marketplace-operator-547dbd544d-74rfw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.598299 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" podUID="3adc63ca-ac54-461a-9a91-10ba0b85fa2b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.619161 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9kbb\" (UniqueName: \"kubernetes.io/projected/4baaed1f-91ae-4249-96ed-11c5a93986ab-kube-api-access-p9kbb\") pod \"certified-operators-rg44w\" (UID: \"4baaed1f-91ae-4249-96ed-11c5a93986ab\") " pod="openshift-marketplace/certified-operators-rg44w" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.620177 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v85l6" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.622440 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-72qpw" podStartSLOduration=92.622415492 podStartE2EDuration="1m32.622415492s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:12.592189973 +0000 UTC m=+112.814274332" watchObservedRunningTime="2026-01-22 14:17:12.622415492 +0000 UTC m=+112.844499841" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.657429 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" podStartSLOduration=92.657412586 podStartE2EDuration="1m32.657412586s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:12.657097707 +0000 UTC m=+112.879182076" watchObservedRunningTime="2026-01-22 14:17:12.657412586 +0000 UTC m=+112.879496945" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.664791 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-ntc52" event={"ID":"41b28f5f-47e9-49c2-a0f3-efb26640b87f","Type":"ContainerStarted","Data":"c8d75990babe018b7aa84ab57b12ed472a5e6fbf5d61f9eeed6ae78e1897a7cf"} Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.680910 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-hggst" podStartSLOduration=92.680881995 podStartE2EDuration="1m32.680881995s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:12.62121872 +0000 UTC m=+112.843303099" watchObservedRunningTime="2026-01-22 14:17:12.680881995 +0000 UTC m=+112.902966374" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.684712 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-swx4b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.684772 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-swx4b" podUID="f89c668c-680e-4342-9003-c7140b9f5d51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.685277 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-fc65d" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.692856 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:12 crc kubenswrapper[5110]: E0122 14:17:12.694345 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:13.194325271 +0000 UTC m=+113.416409620 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.715460 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-rmjwc" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.717822 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-f899n" podStartSLOduration=92.71779948 podStartE2EDuration="1m32.71779948s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:12.716015463 +0000 UTC m=+112.938099822" watchObservedRunningTime="2026-01-22 14:17:12.71779948 +0000 UTC m=+112.939883839" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.776139 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rg44w" Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.797588 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:12 crc kubenswrapper[5110]: E0122 14:17:12.800274 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:13.300257518 +0000 UTC m=+113.522341877 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.904061 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:12 crc kubenswrapper[5110]: E0122 14:17:12.904314 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:13.404295945 +0000 UTC m=+113.626380304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.904577 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:12 crc kubenswrapper[5110]: E0122 14:17:12.905107 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:13.405099357 +0000 UTC m=+113.627183716 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:12 crc kubenswrapper[5110]: I0122 14:17:12.907318 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-9qczr" Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.006446 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:13 crc kubenswrapper[5110]: E0122 14:17:13.006762 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:13.506745781 +0000 UTC m=+113.728830130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.107575 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:13 crc kubenswrapper[5110]: E0122 14:17:13.107920 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:13.607908373 +0000 UTC m=+113.829992732 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.150552 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6z98k"] Jan 22 14:17:13 crc kubenswrapper[5110]: W0122 14:17:13.179538 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode19a5f1e_d63d_47ef_bfd6_5297b60f2fd7.slice/crio-acf6aafbe23ef261df8f336de8015dd816f051a483c716c7be4047c618161c95 WatchSource:0}: Error finding container acf6aafbe23ef261df8f336de8015dd816f051a483c716c7be4047c618161c95: Status 404 returned error can't find the container with id acf6aafbe23ef261df8f336de8015dd816f051a483c716c7be4047c618161c95 Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.208559 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:13 crc kubenswrapper[5110]: E0122 14:17:13.208866 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:13.708849538 +0000 UTC m=+113.930933887 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.261179 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:17:13 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 22 14:17:13 crc kubenswrapper[5110]: [+]process-running ok Jan 22 14:17:13 crc kubenswrapper[5110]: healthz check failed Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.261240 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.279930 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.313507 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:13 crc kubenswrapper[5110]: E0122 14:17:13.313913 5110 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:13.813894362 +0000 UTC m=+114.035978721 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.414849 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:13 crc kubenswrapper[5110]: E0122 14:17:13.415267 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:13.915245209 +0000 UTC m=+114.137329568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.504602 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hxzv2"] Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.519757 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:13 crc kubenswrapper[5110]: E0122 14:17:13.520955 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:14.02093733 +0000 UTC m=+114.243021689 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.582720 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-fc65d" Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.624185 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:13 crc kubenswrapper[5110]: E0122 14:17:13.624506 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:14.124489084 +0000 UTC m=+114.346573443 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.741368 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:13 crc kubenswrapper[5110]: E0122 14:17:13.741680 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:14.241667339 +0000 UTC m=+114.463751698 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.742631 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxzv2" event={"ID":"d099c037-9022-46df-8a66-e2856ee9dbd9","Type":"ContainerStarted","Data":"2b025fbd8fea60ff561b1a8310849e98f2899c01e017eb98563e2ba254819f21"} Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.769083 5110 generic.go:358] "Generic (PLEG): container finished" podID="e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7" containerID="7111ab1340e5e3aff066db4d409b8ded63457cc7666e6f78fc6b8b7f3802ec04" exitCode=0 Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.769210 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6z98k" event={"ID":"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7","Type":"ContainerDied","Data":"7111ab1340e5e3aff066db4d409b8ded63457cc7666e6f78fc6b8b7f3802ec04"} Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.769235 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6z98k" event={"ID":"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7","Type":"ContainerStarted","Data":"acf6aafbe23ef261df8f336de8015dd816f051a483c716c7be4047c618161c95"} Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.769830 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xk6fr"] Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.789375 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-xk6fr"] Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.789529 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xk6fr" Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.794153 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.803411 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" event={"ID":"7af25bf8-c994-4704-821b-ee6df60d64f1","Type":"ContainerStarted","Data":"3658fde9263c536d7f0241ca11821466f0b8d3e7d34232234e50c5fc3b00e084"} Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.819218 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" event={"ID":"ccc3783f-31af-4d45-bf5f-1403105ce449","Type":"ContainerStarted","Data":"9717cdbded394f251b21fb5b4407f6f9008b4da7e3276bfdeb989d4a7e534c3e"} Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.841198 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"5148b9f0-04d6-4dfe-9387-293d146f83c7","Type":"ContainerStarted","Data":"6a109e0c6917aa4a075a22f7fe800bdd537ca01e8deaf72bbe0c92b6aa9f7000"} Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.841755 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-swx4b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.841858 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-swx4b" podUID="f89c668c-680e-4342-9003-c7140b9f5d51" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.843008 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:13 crc kubenswrapper[5110]: E0122 14:17:13.843403 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:14.343361445 +0000 UTC m=+114.565445804 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.843512 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz" podUID="04a3cce4-5dc1-418d-a112-6d9e30fdbc52" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f" gracePeriod=30 Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.851934 5110 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-74rfw container/marketplace-operator namespace/openshift-marketplace: Readiness 
probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.852002 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" podUID="3adc63ca-ac54-461a-9a91-10ba0b85fa2b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.876270 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v85l6"] Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.898513 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" podStartSLOduration=93.898494361 podStartE2EDuration="1m33.898494361s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:13.88711681 +0000 UTC m=+114.109201169" watchObservedRunningTime="2026-01-22 14:17:13.898494361 +0000 UTC m=+114.120578720" Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.921476 5110 ???:1] "http: TLS handshake error from 192.168.126.11:50928: no serving certificate available for the kubelet" Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.948832 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn6xz\" (UniqueName: \"kubernetes.io/projected/0c5d2009-8141-4816-b0d1-350eaee192ef-kube-api-access-xn6xz\") pod \"redhat-marketplace-xk6fr\" (UID: \"0c5d2009-8141-4816-b0d1-350eaee192ef\") " pod="openshift-marketplace/redhat-marketplace-xk6fr" Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.949097 5110 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c5d2009-8141-4816-b0d1-350eaee192ef-catalog-content\") pod \"redhat-marketplace-xk6fr\" (UID: \"0c5d2009-8141-4816-b0d1-350eaee192ef\") " pod="openshift-marketplace/redhat-marketplace-xk6fr" Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.949516 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.949668 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c5d2009-8141-4816-b0d1-350eaee192ef-utilities\") pod \"redhat-marketplace-xk6fr\" (UID: \"0c5d2009-8141-4816-b0d1-350eaee192ef\") " pod="openshift-marketplace/redhat-marketplace-xk6fr" Jan 22 14:17:13 crc kubenswrapper[5110]: I0122 14:17:13.954669 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" podStartSLOduration=93.954641813 podStartE2EDuration="1m33.954641813s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:13.948048319 +0000 UTC m=+114.170132688" watchObservedRunningTime="2026-01-22 14:17:13.954641813 +0000 UTC m=+114.176726172" Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:13.989430 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rg44w"] Jan 22 14:17:14 crc kubenswrapper[5110]: E0122 14:17:13.992183 5110 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:14.492152174 +0000 UTC m=+114.714236533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.051500 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.051661 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c5d2009-8141-4816-b0d1-350eaee192ef-utilities\") pod \"redhat-marketplace-xk6fr\" (UID: \"0c5d2009-8141-4816-b0d1-350eaee192ef\") " pod="openshift-marketplace/redhat-marketplace-xk6fr" Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.051704 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xn6xz\" (UniqueName: \"kubernetes.io/projected/0c5d2009-8141-4816-b0d1-350eaee192ef-kube-api-access-xn6xz\") pod \"redhat-marketplace-xk6fr\" (UID: \"0c5d2009-8141-4816-b0d1-350eaee192ef\") " pod="openshift-marketplace/redhat-marketplace-xk6fr" Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.051791 
5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c5d2009-8141-4816-b0d1-350eaee192ef-catalog-content\") pod \"redhat-marketplace-xk6fr\" (UID: \"0c5d2009-8141-4816-b0d1-350eaee192ef\") " pod="openshift-marketplace/redhat-marketplace-xk6fr" Jan 22 14:17:14 crc kubenswrapper[5110]: E0122 14:17:14.051826 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:14.551799049 +0000 UTC m=+114.773883408 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.052711 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c5d2009-8141-4816-b0d1-350eaee192ef-catalog-content\") pod \"redhat-marketplace-xk6fr\" (UID: \"0c5d2009-8141-4816-b0d1-350eaee192ef\") " pod="openshift-marketplace/redhat-marketplace-xk6fr" Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.056127 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c5d2009-8141-4816-b0d1-350eaee192ef-utilities\") pod \"redhat-marketplace-xk6fr\" (UID: \"0c5d2009-8141-4816-b0d1-350eaee192ef\") " pod="openshift-marketplace/redhat-marketplace-xk6fr" Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 
14:17:14.108126 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn6xz\" (UniqueName: \"kubernetes.io/projected/0c5d2009-8141-4816-b0d1-350eaee192ef-kube-api-access-xn6xz\") pod \"redhat-marketplace-xk6fr\" (UID: \"0c5d2009-8141-4816-b0d1-350eaee192ef\") " pod="openshift-marketplace/redhat-marketplace-xk6fr" Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.148236 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-trs4m"] Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.155456 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:14 crc kubenswrapper[5110]: E0122 14:17:14.155821 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:14.655809265 +0000 UTC m=+114.877893624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.170957 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xk6fr" Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.174906 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-trs4m"] Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.175080 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trs4m" Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.260102 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:14 crc kubenswrapper[5110]: E0122 14:17:14.260512 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:14.76049483 +0000 UTC m=+114.982579189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.274913 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:17:14 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 22 14:17:14 crc kubenswrapper[5110]: [+]process-running ok Jan 22 14:17:14 crc kubenswrapper[5110]: healthz check failed Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.274972 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.365531 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpmjm\" (UniqueName: \"kubernetes.io/projected/9f6cc7b0-6a05-44c4-baba-0decaa2ad061-kube-api-access-wpmjm\") pod \"redhat-marketplace-trs4m\" (UID: \"9f6cc7b0-6a05-44c4-baba-0decaa2ad061\") " pod="openshift-marketplace/redhat-marketplace-trs4m" Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.366144 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.366233 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f6cc7b0-6a05-44c4-baba-0decaa2ad061-utilities\") pod \"redhat-marketplace-trs4m\" (UID: \"9f6cc7b0-6a05-44c4-baba-0decaa2ad061\") " pod="openshift-marketplace/redhat-marketplace-trs4m" Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.366265 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f6cc7b0-6a05-44c4-baba-0decaa2ad061-catalog-content\") pod \"redhat-marketplace-trs4m\" (UID: \"9f6cc7b0-6a05-44c4-baba-0decaa2ad061\") " pod="openshift-marketplace/redhat-marketplace-trs4m" Jan 22 14:17:14 crc kubenswrapper[5110]: E0122 14:17:14.366552 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:14.86653786 +0000 UTC m=+115.088622219 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.462057 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c"
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.467991 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5de37d4a-279a-45d0-ba01-0749e4b765a0-config-volume\") pod \"5de37d4a-279a-45d0-ba01-0749e4b765a0\" (UID: \"5de37d4a-279a-45d0-ba01-0749e4b765a0\") "
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.468212 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.468292 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5de37d4a-279a-45d0-ba01-0749e4b765a0-secret-volume\") pod \"5de37d4a-279a-45d0-ba01-0749e4b765a0\" (UID: \"5de37d4a-279a-45d0-ba01-0749e4b765a0\") "
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.468313 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gpwp\" (UniqueName: \"kubernetes.io/projected/5de37d4a-279a-45d0-ba01-0749e4b765a0-kube-api-access-8gpwp\") pod \"5de37d4a-279a-45d0-ba01-0749e4b765a0\" (UID: \"5de37d4a-279a-45d0-ba01-0749e4b765a0\") "
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.468441 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f6cc7b0-6a05-44c4-baba-0decaa2ad061-utilities\") pod \"redhat-marketplace-trs4m\" (UID: \"9f6cc7b0-6a05-44c4-baba-0decaa2ad061\") " pod="openshift-marketplace/redhat-marketplace-trs4m"
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.468476 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f6cc7b0-6a05-44c4-baba-0decaa2ad061-catalog-content\") pod \"redhat-marketplace-trs4m\" (UID: \"9f6cc7b0-6a05-44c4-baba-0decaa2ad061\") " pod="openshift-marketplace/redhat-marketplace-trs4m"
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.468501 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wpmjm\" (UniqueName: \"kubernetes.io/projected/9f6cc7b0-6a05-44c4-baba-0decaa2ad061-kube-api-access-wpmjm\") pod \"redhat-marketplace-trs4m\" (UID: \"9f6cc7b0-6a05-44c4-baba-0decaa2ad061\") " pod="openshift-marketplace/redhat-marketplace-trs4m"
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.470366 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5de37d4a-279a-45d0-ba01-0749e4b765a0-config-volume" (OuterVolumeSpecName: "config-volume") pod "5de37d4a-279a-45d0-ba01-0749e4b765a0" (UID: "5de37d4a-279a-45d0-ba01-0749e4b765a0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:17:14 crc kubenswrapper[5110]: E0122 14:17:14.470496 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:14.970471815 +0000 UTC m=+115.192556224 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.470971 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f6cc7b0-6a05-44c4-baba-0decaa2ad061-utilities\") pod \"redhat-marketplace-trs4m\" (UID: \"9f6cc7b0-6a05-44c4-baba-0decaa2ad061\") " pod="openshift-marketplace/redhat-marketplace-trs4m"
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.471023 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f6cc7b0-6a05-44c4-baba-0decaa2ad061-catalog-content\") pod \"redhat-marketplace-trs4m\" (UID: \"9f6cc7b0-6a05-44c4-baba-0decaa2ad061\") " pod="openshift-marketplace/redhat-marketplace-trs4m"
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.495729 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5de37d4a-279a-45d0-ba01-0749e4b765a0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5de37d4a-279a-45d0-ba01-0749e4b765a0" (UID: "5de37d4a-279a-45d0-ba01-0749e4b765a0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.509770 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de37d4a-279a-45d0-ba01-0749e4b765a0-kube-api-access-8gpwp" (OuterVolumeSpecName: "kube-api-access-8gpwp") pod "5de37d4a-279a-45d0-ba01-0749e4b765a0" (UID: "5de37d4a-279a-45d0-ba01-0749e4b765a0"). InnerVolumeSpecName "kube-api-access-8gpwp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.529787 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpmjm\" (UniqueName: \"kubernetes.io/projected/9f6cc7b0-6a05-44c4-baba-0decaa2ad061-kube-api-access-wpmjm\") pod \"redhat-marketplace-trs4m\" (UID: \"9f6cc7b0-6a05-44c4-baba-0decaa2ad061\") " pod="openshift-marketplace/redhat-marketplace-trs4m"
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.551026 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trs4m"
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.569494 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.569577 5110 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5de37d4a-279a-45d0-ba01-0749e4b765a0-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.569590 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8gpwp\" (UniqueName: \"kubernetes.io/projected/5de37d4a-279a-45d0-ba01-0749e4b765a0-kube-api-access-8gpwp\") on node \"crc\" DevicePath \"\""
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.569599 5110 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5de37d4a-279a-45d0-ba01-0749e4b765a0-config-volume\") on node \"crc\" DevicePath \"\""
Jan 22 14:17:14 crc kubenswrapper[5110]: E0122 14:17:14.569916 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:15.069901021 +0000 UTC m=+115.291985380 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.625086 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xk6fr"]
Jan 22 14:17:14 crc kubenswrapper[5110]: W0122 14:17:14.659745 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c5d2009_8141_4816_b0d1_350eaee192ef.slice/crio-98fe0a7d55c41ed49d42e70404f635109a7d01ae183d1c47ec9d8c7ae25e85c3 WatchSource:0}: Error finding container 98fe0a7d55c41ed49d42e70404f635109a7d01ae183d1c47ec9d8c7ae25e85c3: Status 404 returned error can't find the container with id 98fe0a7d55c41ed49d42e70404f635109a7d01ae183d1c47ec9d8c7ae25e85c3
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.670181 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:14 crc kubenswrapper[5110]: E0122 14:17:14.670649 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:15.170606991 +0000 UTC m=+115.392691350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.772489 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:14 crc kubenswrapper[5110]: E0122 14:17:14.772902 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:15.272887102 +0000 UTC m=+115.494971461 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.851398 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk6fr" event={"ID":"0c5d2009-8141-4816-b0d1-350eaee192ef","Type":"ContainerStarted","Data":"98fe0a7d55c41ed49d42e70404f635109a7d01ae183d1c47ec9d8c7ae25e85c3"}
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.852855 5110 generic.go:358] "Generic (PLEG): container finished" podID="5148b9f0-04d6-4dfe-9387-293d146f83c7" containerID="b88fba394347ce5cbbedf45212d7c67fba0496564f8adc3169211cf6961fb1d5" exitCode=0
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.852977 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"5148b9f0-04d6-4dfe-9387-293d146f83c7","Type":"ContainerDied","Data":"b88fba394347ce5cbbedf45212d7c67fba0496564f8adc3169211cf6961fb1d5"}
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.854277 5110 generic.go:358] "Generic (PLEG): container finished" podID="4baaed1f-91ae-4249-96ed-11c5a93986ab" containerID="c9faa4f696055c1d02ecc9cb93461ce8f8cbe6dbcbdc9243d4cdc276e8a5d592" exitCode=0
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.854336 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rg44w" event={"ID":"4baaed1f-91ae-4249-96ed-11c5a93986ab","Type":"ContainerDied","Data":"c9faa4f696055c1d02ecc9cb93461ce8f8cbe6dbcbdc9243d4cdc276e8a5d592"}
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.854498 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rg44w" event={"ID":"4baaed1f-91ae-4249-96ed-11c5a93986ab","Type":"ContainerStarted","Data":"424407489d4d5c287109ae543b702896e9fa174afc1e8b5c8d89c9adeacfc74a"}
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.862517 5110 generic.go:358] "Generic (PLEG): container finished" podID="ca4e8bf1-1091-4c8c-b724-9af50f626c94" containerID="04c073bc62cf780fe78979bd3e0ad17efe827b4d851bff105628ec8db1ac16ec" exitCode=0
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.862714 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v85l6" event={"ID":"ca4e8bf1-1091-4c8c-b724-9af50f626c94","Type":"ContainerDied","Data":"04c073bc62cf780fe78979bd3e0ad17efe827b4d851bff105628ec8db1ac16ec"}
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.862744 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v85l6" event={"ID":"ca4e8bf1-1091-4c8c-b724-9af50f626c94","Type":"ContainerStarted","Data":"1ced04319b73ea16613f37342d47aa1492e26b96f9ef8ed7caba58e613bbb8d5"}
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.874265 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:14 crc kubenswrapper[5110]: E0122 14:17:14.875101 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:15.37507377 +0000 UTC m=+115.597158119 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.878811 5110 generic.go:358] "Generic (PLEG): container finished" podID="d099c037-9022-46df-8a66-e2856ee9dbd9" containerID="515a7907df2eb4c829879d15bad44609ed7c3871e29fa7b71c78271a206e1091" exitCode=0
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.878961 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxzv2" event={"ID":"d099c037-9022-46df-8a66-e2856ee9dbd9","Type":"ContainerDied","Data":"515a7907df2eb4c829879d15bad44609ed7c3871e29fa7b71c78271a206e1091"}
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.895959 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c"
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.896017 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484855-8xz9c" event={"ID":"5de37d4a-279a-45d0-ba01-0749e4b765a0","Type":"ContainerDied","Data":"437e5cd8034e4cc033c254a9b5af69f702909940a763e755cd070086c101ffd8"}
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.896054 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="437e5cd8034e4cc033c254a9b5af69f702909940a763e755cd070086c101ffd8"
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.897181 5110 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-74rfw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.897225 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" podUID="3adc63ca-ac54-461a-9a91-10ba0b85fa2b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.943537 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-trs4m"]
Jan 22 14:17:14 crc kubenswrapper[5110]: I0122 14:17:14.976740 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:14 crc kubenswrapper[5110]: E0122 14:17:14.978999 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:15.478982344 +0000 UTC m=+115.701066693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:15 crc kubenswrapper[5110]: W0122 14:17:15.029111 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f6cc7b0_6a05_44c4_baba_0decaa2ad061.slice/crio-d0f8b61bdc57fd3aad2497fb7def94cb9114386b2177a61fe93b1e3b5f996f82 WatchSource:0}: Error finding container d0f8b61bdc57fd3aad2497fb7def94cb9114386b2177a61fe93b1e3b5f996f82: Status 404 returned error can't find the container with id d0f8b61bdc57fd3aad2497fb7def94cb9114386b2177a61fe93b1e3b5f996f82
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.079119 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:15 crc kubenswrapper[5110]: E0122 14:17:15.079313 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:15.579286883 +0000 UTC m=+115.801371242 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.079468 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:15 crc kubenswrapper[5110]: E0122 14:17:15.079806 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:15.579793197 +0000 UTC m=+115.801877556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.142579 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wwt5t"]
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.155161 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5de37d4a-279a-45d0-ba01-0749e4b765a0" containerName="collect-profiles"
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.155192 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de37d4a-279a-45d0-ba01-0749e4b765a0" containerName="collect-profiles"
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.155349 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="5de37d4a-279a-45d0-ba01-0749e4b765a0" containerName="collect-profiles"
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.164831 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwt5t"
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.168557 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.173019 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wwt5t"]
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.182129 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:15 crc kubenswrapper[5110]: E0122 14:17:15.182502 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:15.682482948 +0000 UTC m=+115.904567307 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.182302 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b263fe-d08c-46d3-ba73-30ad3e8deec1-catalog-content\") pod \"redhat-operators-wwt5t\" (UID: \"39b263fe-d08c-46d3-ba73-30ad3e8deec1\") " pod="openshift-marketplace/redhat-operators-wwt5t"
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.182724 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.182920 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b263fe-d08c-46d3-ba73-30ad3e8deec1-utilities\") pod \"redhat-operators-wwt5t\" (UID: \"39b263fe-d08c-46d3-ba73-30ad3e8deec1\") " pod="openshift-marketplace/redhat-operators-wwt5t"
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.182977 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztpx9\" (UniqueName: \"kubernetes.io/projected/39b263fe-d08c-46d3-ba73-30ad3e8deec1-kube-api-access-ztpx9\") pod \"redhat-operators-wwt5t\" (UID: \"39b263fe-d08c-46d3-ba73-30ad3e8deec1\") " pod="openshift-marketplace/redhat-operators-wwt5t"
Jan 22 14:17:15 crc kubenswrapper[5110]: E0122 14:17:15.183315 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:15.68330686 +0000 UTC m=+115.905391219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.234758 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 14:17:15 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld
Jan 22 14:17:15 crc kubenswrapper[5110]: [+]process-running ok
Jan 22 14:17:15 crc kubenswrapper[5110]: healthz check failed
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.234831 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.273905 5110 scope.go:117] "RemoveContainer" containerID="d92e709029c46faaa9145ae147eb23298fd350281c9da48e17ca1a5fe7b5de07"
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.284417 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:15 crc kubenswrapper[5110]: E0122 14:17:15.284637 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:15.784596064 +0000 UTC m=+116.006680423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.284748 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ztpx9\" (UniqueName: \"kubernetes.io/projected/39b263fe-d08c-46d3-ba73-30ad3e8deec1-kube-api-access-ztpx9\") pod \"redhat-operators-wwt5t\" (UID: \"39b263fe-d08c-46d3-ba73-30ad3e8deec1\") " pod="openshift-marketplace/redhat-operators-wwt5t"
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.284800 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b263fe-d08c-46d3-ba73-30ad3e8deec1-catalog-content\") pod \"redhat-operators-wwt5t\" (UID: \"39b263fe-d08c-46d3-ba73-30ad3e8deec1\") " pod="openshift-marketplace/redhat-operators-wwt5t"
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.284862 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.285040 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b263fe-d08c-46d3-ba73-30ad3e8deec1-utilities\") pod \"redhat-operators-wwt5t\" (UID: \"39b263fe-d08c-46d3-ba73-30ad3e8deec1\") " pod="openshift-marketplace/redhat-operators-wwt5t"
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.285270 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b263fe-d08c-46d3-ba73-30ad3e8deec1-catalog-content\") pod \"redhat-operators-wwt5t\" (UID: \"39b263fe-d08c-46d3-ba73-30ad3e8deec1\") " pod="openshift-marketplace/redhat-operators-wwt5t"
Jan 22 14:17:15 crc kubenswrapper[5110]: E0122 14:17:15.285312 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:15.785295033 +0000 UTC m=+116.007379392 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.285352 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b263fe-d08c-46d3-ba73-30ad3e8deec1-utilities\") pod \"redhat-operators-wwt5t\" (UID: \"39b263fe-d08c-46d3-ba73-30ad3e8deec1\") " pod="openshift-marketplace/redhat-operators-wwt5t"
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.303925 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztpx9\" (UniqueName: \"kubernetes.io/projected/39b263fe-d08c-46d3-ba73-30ad3e8deec1-kube-api-access-ztpx9\") pod \"redhat-operators-wwt5t\" (UID: \"39b263fe-d08c-46d3-ba73-30ad3e8deec1\") " pod="openshift-marketplace/redhat-operators-wwt5t"
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.386802 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:15 crc kubenswrapper[5110]: E0122 14:17:15.387114 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:15.887086491 +0000 UTC m=+116.109170850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.486712 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwt5t"
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.488292 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:15 crc kubenswrapper[5110]: E0122 14:17:15.488686 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:15.988671423 +0000 UTC m=+116.210755782 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.561123 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-khw24"]
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.589453 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 14:17:15 crc kubenswrapper[5110]: E0122 14:17:15.589849 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:16.089833715 +0000 UTC m=+116.311918074 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.690594 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:15 crc kubenswrapper[5110]: E0122 14:17:15.691074 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:16.191057378 +0000 UTC m=+116.413141737 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.730642 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-khw24"] Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.730800 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-khw24" Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.792592 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.792908 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xz4z\" (UniqueName: \"kubernetes.io/projected/d7eb5f07-02fb-4d06-95f6-e0d5652bae89-kube-api-access-8xz4z\") pod \"redhat-operators-khw24\" (UID: \"d7eb5f07-02fb-4d06-95f6-e0d5652bae89\") " pod="openshift-marketplace/redhat-operators-khw24" Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.793034 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7eb5f07-02fb-4d06-95f6-e0d5652bae89-utilities\") pod \"redhat-operators-khw24\" (UID: \"d7eb5f07-02fb-4d06-95f6-e0d5652bae89\") " 
pod="openshift-marketplace/redhat-operators-khw24" Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.793173 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7eb5f07-02fb-4d06-95f6-e0d5652bae89-catalog-content\") pod \"redhat-operators-khw24\" (UID: \"d7eb5f07-02fb-4d06-95f6-e0d5652bae89\") " pod="openshift-marketplace/redhat-operators-khw24" Jan 22 14:17:15 crc kubenswrapper[5110]: E0122 14:17:15.793327 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:16.293310728 +0000 UTC m=+116.515395087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.894112 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.894548 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d7eb5f07-02fb-4d06-95f6-e0d5652bae89-catalog-content\") pod \"redhat-operators-khw24\" (UID: \"d7eb5f07-02fb-4d06-95f6-e0d5652bae89\") " pod="openshift-marketplace/redhat-operators-khw24" Jan 22 14:17:15 crc kubenswrapper[5110]: E0122 14:17:15.894671 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:16.394655014 +0000 UTC m=+116.616739373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.894710 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8xz4z\" (UniqueName: \"kubernetes.io/projected/d7eb5f07-02fb-4d06-95f6-e0d5652bae89-kube-api-access-8xz4z\") pod \"redhat-operators-khw24\" (UID: \"d7eb5f07-02fb-4d06-95f6-e0d5652bae89\") " pod="openshift-marketplace/redhat-operators-khw24" Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.894841 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7eb5f07-02fb-4d06-95f6-e0d5652bae89-utilities\") pod \"redhat-operators-khw24\" (UID: \"d7eb5f07-02fb-4d06-95f6-e0d5652bae89\") " pod="openshift-marketplace/redhat-operators-khw24" Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.895242 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d7eb5f07-02fb-4d06-95f6-e0d5652bae89-utilities\") pod \"redhat-operators-khw24\" (UID: \"d7eb5f07-02fb-4d06-95f6-e0d5652bae89\") " pod="openshift-marketplace/redhat-operators-khw24" Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.896056 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7eb5f07-02fb-4d06-95f6-e0d5652bae89-catalog-content\") pod \"redhat-operators-khw24\" (UID: \"d7eb5f07-02fb-4d06-95f6-e0d5652bae89\") " pod="openshift-marketplace/redhat-operators-khw24" Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.920485 5110 generic.go:358] "Generic (PLEG): container finished" podID="9f6cc7b0-6a05-44c4-baba-0decaa2ad061" containerID="cc34c44c234ed7625e72b04a55b156b51b3e577fde7629885b017f8b0f35c1e2" exitCode=0 Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.920590 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trs4m" event={"ID":"9f6cc7b0-6a05-44c4-baba-0decaa2ad061","Type":"ContainerDied","Data":"cc34c44c234ed7625e72b04a55b156b51b3e577fde7629885b017f8b0f35c1e2"} Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.920683 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trs4m" event={"ID":"9f6cc7b0-6a05-44c4-baba-0decaa2ad061","Type":"ContainerStarted","Data":"d0f8b61bdc57fd3aad2497fb7def94cb9114386b2177a61fe93b1e3b5f996f82"} Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.920969 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xz4z\" (UniqueName: \"kubernetes.io/projected/d7eb5f07-02fb-4d06-95f6-e0d5652bae89-kube-api-access-8xz4z\") pod \"redhat-operators-khw24\" (UID: \"d7eb5f07-02fb-4d06-95f6-e0d5652bae89\") " pod="openshift-marketplace/redhat-operators-khw24" Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.928373 5110 kubelet.go:2544] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/redhat-operators-wwt5t"] Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.953876 5110 generic.go:358] "Generic (PLEG): container finished" podID="0c5d2009-8141-4816-b0d1-350eaee192ef" containerID="5242cc1f572a5eff9a10abc411fe69f45c879e72512d1af899e61b6fb7a448f7" exitCode=0 Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.954004 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk6fr" event={"ID":"0c5d2009-8141-4816-b0d1-350eaee192ef","Type":"ContainerDied","Data":"5242cc1f572a5eff9a10abc411fe69f45c879e72512d1af899e61b6fb7a448f7"} Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.966254 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.997823 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:15 crc kubenswrapper[5110]: I0122 14:17:15.998752 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8jstw" event={"ID":"a1a05965-c2b8-41e0-89d2-973217252f27","Type":"ContainerStarted","Data":"f704720c1093719c654458948f3ca267de78ce292da1cf0bb871c658d81f63c2"} Jan 22 14:17:15 crc kubenswrapper[5110]: E0122 14:17:15.998888 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:16.498866226 +0000 UTC m=+116.720950585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.071922 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-khw24" Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.100224 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:16 crc kubenswrapper[5110]: E0122 14:17:16.100582 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:16.600564572 +0000 UTC m=+116.822648931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.109296 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.109377 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.134850 5110 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-knd9m container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 22 14:17:16 crc kubenswrapper[5110]: [+]log ok Jan 22 14:17:16 crc kubenswrapper[5110]: [+]etcd ok Jan 22 14:17:16 crc kubenswrapper[5110]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 22 14:17:16 crc kubenswrapper[5110]: [+]poststarthook/generic-apiserver-start-informers ok Jan 22 14:17:16 crc kubenswrapper[5110]: [+]poststarthook/max-in-flight-filter ok Jan 22 14:17:16 crc kubenswrapper[5110]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 22 14:17:16 crc kubenswrapper[5110]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 22 14:17:16 crc kubenswrapper[5110]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 22 14:17:16 crc kubenswrapper[5110]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 22 14:17:16 crc kubenswrapper[5110]: 
[+]poststarthook/project.openshift.io-projectcache ok Jan 22 14:17:16 crc kubenswrapper[5110]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 22 14:17:16 crc kubenswrapper[5110]: [+]poststarthook/openshift.io-startinformers ok Jan 22 14:17:16 crc kubenswrapper[5110]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 22 14:17:16 crc kubenswrapper[5110]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 22 14:17:16 crc kubenswrapper[5110]: livez check failed Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.134908 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" podUID="7af25bf8-c994-4704-821b-ee6df60d64f1" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.201164 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:16 crc kubenswrapper[5110]: E0122 14:17:16.201509 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:16.701454107 +0000 UTC m=+116.923538466 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.201665 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:16 crc kubenswrapper[5110]: E0122 14:17:16.202328 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:16.702312499 +0000 UTC m=+116.924396918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.206141 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-pk2pv" Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.206189 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-pk2pv" Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.209571 5110 patch_prober.go:28] interesting pod/console-64d44f6ddf-pk2pv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.209640 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-pk2pv" podUID="5c99795b-25a0-4c75-87ba-3c72c10f621d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.237513 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.242790 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Jan 22 14:17:16 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 22 14:17:16 crc kubenswrapper[5110]: [+]process-running ok Jan 22 14:17:16 crc kubenswrapper[5110]: healthz check failed Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.243123 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.303024 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:16 crc kubenswrapper[5110]: E0122 14:17:16.304440 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:16.804420936 +0000 UTC m=+117.026505295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.407881 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.411168 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:16 crc kubenswrapper[5110]: E0122 14:17:16.411501 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:16.911487053 +0000 UTC m=+117.133571412 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.514093 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5148b9f0-04d6-4dfe-9387-293d146f83c7-kube-api-access\") pod \"5148b9f0-04d6-4dfe-9387-293d146f83c7\" (UID: \"5148b9f0-04d6-4dfe-9387-293d146f83c7\") " Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.514251 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.514273 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5148b9f0-04d6-4dfe-9387-293d146f83c7-kubelet-dir\") pod \"5148b9f0-04d6-4dfe-9387-293d146f83c7\" (UID: \"5148b9f0-04d6-4dfe-9387-293d146f83c7\") " Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.514507 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5148b9f0-04d6-4dfe-9387-293d146f83c7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5148b9f0-04d6-4dfe-9387-293d146f83c7" (UID: "5148b9f0-04d6-4dfe-9387-293d146f83c7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:17:16 crc kubenswrapper[5110]: E0122 14:17:16.515370 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:17.015338736 +0000 UTC m=+117.237423095 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.520762 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5148b9f0-04d6-4dfe-9387-293d146f83c7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5148b9f0-04d6-4dfe-9387-293d146f83c7" (UID: "5148b9f0-04d6-4dfe-9387-293d146f83c7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.528378 5110 ???:1] "http: TLS handshake error from 192.168.126.11:57936: no serving certificate available for the kubelet" Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.636440 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.636793 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5148b9f0-04d6-4dfe-9387-293d146f83c7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.636805 5110 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/5148b9f0-04d6-4dfe-9387-293d146f83c7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:16 crc kubenswrapper[5110]: E0122 14:17:16.637728 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:17.137705317 +0000 UTC m=+117.359789676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.738261 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:16 crc kubenswrapper[5110]: E0122 14:17:16.738516 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:17.238471468 +0000 UTC m=+117.460555827 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.738712 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:16 crc kubenswrapper[5110]: E0122 14:17:16.739144 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:17.239104315 +0000 UTC m=+117.461188674 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.777796 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-khw24"] Jan 22 14:17:16 crc kubenswrapper[5110]: W0122 14:17:16.803953 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7eb5f07_02fb_4d06_95f6_e0d5652bae89.slice/crio-1e6b683d7cfd8f7638ed40ff3ff46f0e26abf041d98ae5af3762f524b77e2481 WatchSource:0}: Error finding container 1e6b683d7cfd8f7638ed40ff3ff46f0e26abf041d98ae5af3762f524b77e2481: Status 404 returned error can't find the container with id 1e6b683d7cfd8f7638ed40ff3ff46f0e26abf041d98ae5af3762f524b77e2481 Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.840875 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:16 crc kubenswrapper[5110]: E0122 14:17:16.841029 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:17.341004416 +0000 UTC m=+117.563088785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.841215 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:16 crc kubenswrapper[5110]: E0122 14:17:16.841728 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:17.341718795 +0000 UTC m=+117.563803154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.942984 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:16 crc kubenswrapper[5110]: E0122 14:17:16.943159 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:17.443136083 +0000 UTC m=+117.665220442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:16 crc kubenswrapper[5110]: I0122 14:17:16.943451 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:16 crc kubenswrapper[5110]: E0122 14:17:16.943795 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:17.44378552 +0000 UTC m=+117.665869879 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.008995 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.010930 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"5e10fa56c9f05a69f1fa1ac51c4fbf617c845aba6e15a4fc67169d365931259c"} Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.011381 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.015859 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khw24" event={"ID":"d7eb5f07-02fb-4d06-95f6-e0d5652bae89","Type":"ContainerStarted","Data":"51534cfb7608ed2de01cf801974da77f74ae000d9aad4ba2c651b91c64aeac6a"} Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.015918 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khw24" event={"ID":"d7eb5f07-02fb-4d06-95f6-e0d5652bae89","Type":"ContainerStarted","Data":"1e6b683d7cfd8f7638ed40ff3ff46f0e26abf041d98ae5af3762f524b77e2481"} Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.019555 5110 generic.go:358] "Generic (PLEG): container finished" 
podID="39b263fe-d08c-46d3-ba73-30ad3e8deec1" containerID="4ec51597be03823ca75541b8bb490f7914fddc51fe0b4d99df49b91874f32b3e" exitCode=0 Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.021381 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwt5t" event={"ID":"39b263fe-d08c-46d3-ba73-30ad3e8deec1","Type":"ContainerDied","Data":"4ec51597be03823ca75541b8bb490f7914fddc51fe0b4d99df49b91874f32b3e"} Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.021429 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwt5t" event={"ID":"39b263fe-d08c-46d3-ba73-30ad3e8deec1","Type":"ContainerStarted","Data":"d2087f1e44b69397528ea62501acc00e4cc7b140643d3dd6a4b423392ef7536a"} Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.034146 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"5148b9f0-04d6-4dfe-9387-293d146f83c7","Type":"ContainerDied","Data":"6a109e0c6917aa4a075a22f7fe800bdd537ca01e8deaf72bbe0c92b6aa9f7000"} Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.034194 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a109e0c6917aa4a075a22f7fe800bdd537ca01e8deaf72bbe0c92b6aa9f7000" Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.035772 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.038222 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=29.038209804 podStartE2EDuration="29.038209804s" podCreationTimestamp="2026-01-22 14:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:17.033589152 +0000 UTC m=+117.255673521" watchObservedRunningTime="2026-01-22 14:17:17.038209804 +0000 UTC m=+117.260294163" Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.047301 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:17 crc kubenswrapper[5110]: E0122 14:17:17.047483 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:17.547460208 +0000 UTC m=+117.769544567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.047763 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:17 crc kubenswrapper[5110]: E0122 14:17:17.048124 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:17.548117165 +0000 UTC m=+117.770201524 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.126084 5110 patch_prober.go:28] interesting pod/downloads-747b44746d-swx4b container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.126442 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-swx4b" podUID="f89c668c-680e-4342-9003-c7140b9f5d51" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.149315 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:17 crc kubenswrapper[5110]: E0122 14:17:17.151973 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:17.651950947 +0000 UTC m=+117.874035306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.159867 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.159911 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.167202 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.236249 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:17:17 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 22 14:17:17 crc kubenswrapper[5110]: [+]process-running ok Jan 22 14:17:17 crc kubenswrapper[5110]: healthz check failed Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.236318 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.251650 5110 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:17 crc kubenswrapper[5110]: E0122 14:17:17.252192 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:17.752168684 +0000 UTC m=+117.974253053 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.353022 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:17 crc kubenswrapper[5110]: E0122 14:17:17.353229 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:17.853195042 +0000 UTC m=+118.075279401 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.353345 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:17 crc kubenswrapper[5110]: E0122 14:17:17.353789 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:17.853774967 +0000 UTC m=+118.075859326 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.455240 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:17 crc kubenswrapper[5110]: E0122 14:17:17.455539 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:17.955521424 +0000 UTC m=+118.177605783 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.556713 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:17 crc kubenswrapper[5110]: E0122 14:17:17.557058 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:18.057044495 +0000 UTC m=+118.279128854 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.664349 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:17 crc kubenswrapper[5110]: E0122 14:17:17.664734 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:18.164714488 +0000 UTC m=+118.386798857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.767658 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:17 crc kubenswrapper[5110]: E0122 14:17:17.768232 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:18.268217022 +0000 UTC m=+118.490301381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.868611 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:17 crc kubenswrapper[5110]: E0122 14:17:17.868824 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:18.368793948 +0000 UTC m=+118.590878297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.869231 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:17 crc kubenswrapper[5110]: E0122 14:17:17.869615 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:18.369602589 +0000 UTC m=+118.591686948 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.970941 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:17 crc kubenswrapper[5110]: E0122 14:17:17.971092 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:18.471066919 +0000 UTC m=+118.693151278 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:17 crc kubenswrapper[5110]: I0122 14:17:17.971325 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:17 crc kubenswrapper[5110]: E0122 14:17:17.971724 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:18.471707245 +0000 UTC m=+118.693791604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:18 crc kubenswrapper[5110]: I0122 14:17:18.044133 5110 generic.go:358] "Generic (PLEG): container finished" podID="d7eb5f07-02fb-4d06-95f6-e0d5652bae89" containerID="51534cfb7608ed2de01cf801974da77f74ae000d9aad4ba2c651b91c64aeac6a" exitCode=0 Jan 22 14:17:18 crc kubenswrapper[5110]: I0122 14:17:18.044194 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khw24" event={"ID":"d7eb5f07-02fb-4d06-95f6-e0d5652bae89","Type":"ContainerDied","Data":"51534cfb7608ed2de01cf801974da77f74ae000d9aad4ba2c651b91c64aeac6a"} Jan 22 14:17:18 crc kubenswrapper[5110]: I0122 14:17:18.053983 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-4tb8q" Jan 22 14:17:18 crc kubenswrapper[5110]: I0122 14:17:18.084206 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:18 crc kubenswrapper[5110]: E0122 14:17:18.085412 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 14:17:18.585386438 +0000 UTC m=+118.807470797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:18 crc kubenswrapper[5110]: E0122 14:17:18.186768 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:18.686754654 +0000 UTC m=+118.908839013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:18 crc kubenswrapper[5110]: I0122 14:17:18.187096 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:18 crc kubenswrapper[5110]: I0122 14:17:18.233789 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:17:18 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 22 14:17:18 crc kubenswrapper[5110]: [+]process-running ok Jan 22 14:17:18 crc kubenswrapper[5110]: healthz check failed Jan 22 14:17:18 crc kubenswrapper[5110]: I0122 14:17:18.233842 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:17:18 crc kubenswrapper[5110]: I0122 14:17:18.297557 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:18 crc kubenswrapper[5110]: E0122 14:17:18.298345 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:18.798315671 +0000 UTC m=+119.020400030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:18 crc kubenswrapper[5110]: I0122 14:17:18.401137 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:18 crc kubenswrapper[5110]: E0122 14:17:18.401633 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:18.901584038 +0000 UTC m=+119.123668397 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:18 crc kubenswrapper[5110]: I0122 14:17:18.502891 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:18 crc kubenswrapper[5110]: E0122 14:17:18.503829 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:19.003801867 +0000 UTC m=+119.225886226 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:18 crc kubenswrapper[5110]: I0122 14:17:18.606437 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:18 crc kubenswrapper[5110]: E0122 14:17:18.607010 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:19.106993172 +0000 UTC m=+119.329077531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:18 crc kubenswrapper[5110]: I0122 14:17:18.708344 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:18 crc kubenswrapper[5110]: E0122 14:17:18.708549 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:19.208524983 +0000 UTC m=+119.430609342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:18 crc kubenswrapper[5110]: I0122 14:17:18.709125 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:18 crc kubenswrapper[5110]: E0122 14:17:18.709716 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:19.209704695 +0000 UTC m=+119.431789054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:18 crc kubenswrapper[5110]: E0122 14:17:18.811834 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:19.311811511 +0000 UTC m=+119.533895870 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:18 crc kubenswrapper[5110]: I0122 14:17:18.811929 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:18 crc kubenswrapper[5110]: I0122 14:17:18.812573 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") 
pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:18 crc kubenswrapper[5110]: E0122 14:17:18.813363 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:19.313169317 +0000 UTC m=+119.535253676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:18 crc kubenswrapper[5110]: I0122 14:17:18.913806 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:18 crc kubenswrapper[5110]: E0122 14:17:18.913990 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:19.413973448 +0000 UTC m=+119.636057797 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:18 crc kubenswrapper[5110]: I0122 14:17:18.914499 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:18 crc kubenswrapper[5110]: E0122 14:17:18.914944 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:19.414935673 +0000 UTC m=+119.637020022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:19 crc kubenswrapper[5110]: I0122 14:17:19.016896 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:19 crc kubenswrapper[5110]: E0122 14:17:19.017066 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:19.517025389 +0000 UTC m=+119.739109758 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:19 crc kubenswrapper[5110]: I0122 14:17:19.017797 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:19 crc kubenswrapper[5110]: E0122 14:17:19.018310 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:19.518292203 +0000 UTC m=+119.740376562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:19 crc kubenswrapper[5110]: I0122 14:17:19.120018 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:19 crc kubenswrapper[5110]: E0122 14:17:19.121007 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:19.620987295 +0000 UTC m=+119.843071644 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:19 crc kubenswrapper[5110]: I0122 14:17:19.222465 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:19 crc kubenswrapper[5110]: E0122 14:17:19.222911 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:19.722894626 +0000 UTC m=+119.944978985 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:19 crc kubenswrapper[5110]: I0122 14:17:19.235642 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:17:19 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 22 14:17:19 crc kubenswrapper[5110]: [+]process-running ok Jan 22 14:17:19 crc kubenswrapper[5110]: healthz check failed Jan 22 14:17:19 crc kubenswrapper[5110]: I0122 14:17:19.235714 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:17:19 crc kubenswrapper[5110]: I0122 14:17:19.324471 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:19 crc kubenswrapper[5110]: E0122 14:17:19.324702 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 14:17:19.824678784 +0000 UTC m=+120.046763143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:19 crc kubenswrapper[5110]: I0122 14:17:19.324794 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:19 crc kubenswrapper[5110]: E0122 14:17:19.325119 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:19.825104005 +0000 UTC m=+120.047188364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:19 crc kubenswrapper[5110]: I0122 14:17:19.429660 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:19 crc kubenswrapper[5110]: E0122 14:17:19.430136 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:19.930116678 +0000 UTC m=+120.152201037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:19 crc kubenswrapper[5110]: I0122 14:17:19.531599 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:19 crc kubenswrapper[5110]: E0122 14:17:19.531925 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:20.031909966 +0000 UTC m=+120.253994325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:19 crc kubenswrapper[5110]: I0122 14:17:19.553395 5110 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 22 14:17:19 crc kubenswrapper[5110]: I0122 14:17:19.636820 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:19 crc kubenswrapper[5110]: E0122 14:17:19.637159 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:20.137143555 +0000 UTC m=+120.359227914 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:19 crc kubenswrapper[5110]: I0122 14:17:19.738525 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:19 crc kubenswrapper[5110]: E0122 14:17:19.739048 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:20.239002455 +0000 UTC m=+120.461086814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:19 crc kubenswrapper[5110]: I0122 14:17:19.840110 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:19 crc kubenswrapper[5110]: E0122 14:17:19.840322 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:20.34029395 +0000 UTC m=+120.562378309 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:19 crc kubenswrapper[5110]: I0122 14:17:19.840583 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:19 crc kubenswrapper[5110]: E0122 14:17:19.840964 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:20.340951098 +0000 UTC m=+120.563035457 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:19 crc kubenswrapper[5110]: I0122 14:17:19.856052 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-flfnb" Jan 22 14:17:19 crc kubenswrapper[5110]: I0122 14:17:19.941792 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:19 crc kubenswrapper[5110]: E0122 14:17:19.941965 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:20.441936864 +0000 UTC m=+120.664021223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:19 crc kubenswrapper[5110]: I0122 14:17:19.942514 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:19 crc kubenswrapper[5110]: E0122 14:17:19.942989 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:20.442973042 +0000 UTC m=+120.665057411 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.043973 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:20 crc kubenswrapper[5110]: E0122 14:17:20.044200 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:20.544172634 +0000 UTC m=+120.766256993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.044683 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:20 crc kubenswrapper[5110]: E0122 14:17:20.045013 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:20.544997596 +0000 UTC m=+120.767081955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.063948 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8jstw" event={"ID":"a1a05965-c2b8-41e0-89d2-973217252f27","Type":"ContainerStarted","Data":"5e12a71cb745e0447602deb740f9497c35770bba0312f99885978e75364739cf"} Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.146302 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:20 crc kubenswrapper[5110]: E0122 14:17:20.146494 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:20.646441105 +0000 UTC m=+120.868525464 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.235895 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:17:20 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 22 14:17:20 crc kubenswrapper[5110]: [+]process-running ok Jan 22 14:17:20 crc kubenswrapper[5110]: healthz check failed Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.235957 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.248203 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:20 crc kubenswrapper[5110]: E0122 14:17:20.248716 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 14:17:20.748692725 +0000 UTC m=+120.970777094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.348951 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:20 crc kubenswrapper[5110]: E0122 14:17:20.349518 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:20.849182929 +0000 UTC m=+121.071267288 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.350021 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:20 crc kubenswrapper[5110]: E0122 14:17:20.350353 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:20.85034511 +0000 UTC m=+121.072429469 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.397080 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.397970 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5148b9f0-04d6-4dfe-9387-293d146f83c7" containerName="pruner" Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.397988 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="5148b9f0-04d6-4dfe-9387-293d146f83c7" containerName="pruner" Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.398186 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="5148b9f0-04d6-4dfe-9387-293d146f83c7" containerName="pruner" Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.404338 5110 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-22T14:17:19.553426755Z","UUID":"1cdcc534-5cb6-4ba6-bc1b-f99d882699fd","Handler":null,"Name":"","Endpoint":""} Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.450875 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:20 crc 
kubenswrapper[5110]: E0122 14:17:20.450997 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:20.950977007 +0000 UTC m=+121.173061366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.451808 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:20 crc kubenswrapper[5110]: E0122 14:17:20.452133 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 14:17:20.952116277 +0000 UTC m=+121.174200636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-x4jp5" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.553297 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:20 crc kubenswrapper[5110]: E0122 14:17:20.553699 5110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 14:17:21.053679149 +0000 UTC m=+121.275763508 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.575285 5110 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.575324 5110 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.655248 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.658078 5110 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.658109 5110 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.959951 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.960038 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.960078 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.960193 5110 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.961861 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.962133 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.962151 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.972282 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.994951 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.996299 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: 
\"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:17:20 crc kubenswrapper[5110]: I0122 14:17:20.997045 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.020341 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.045645 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-x4jp5\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.061901 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.062141 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs\") pod \"network-metrics-daemon-js5pl\" (UID: \"455fa20f-c1d4-4086-8874-9526d4c4d24d\") " pod="openshift-multus/network-metrics-daemon-js5pl" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.064008 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.070089 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.076590 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/455fa20f-c1d4-4086-8874-9526d4c4d24d-metrics-certs\") pod \"network-metrics-daemon-js5pl\" (UID: \"455fa20f-c1d4-4086-8874-9526d4c4d24d\") " pod="openshift-multus/network-metrics-daemon-js5pl" Jan 22 14:17:21 crc kubenswrapper[5110]: E0122 14:17:21.130965 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 14:17:21 crc kubenswrapper[5110]: E0122 14:17:21.133300 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 14:17:21 crc kubenswrapper[5110]: E0122 14:17:21.134371 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 14:17:21 crc kubenswrapper[5110]: E0122 14:17:21.134401 5110 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz" podUID="04a3cce4-5dc1-418d-a112-6d9e30fdbc52" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.150332 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.150381 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.150556 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.152760 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.152952 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.161184 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-knd9m" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.167805 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-s875b" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.187264 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.189356 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.198125 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.202778 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.205071 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.208756 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-js5pl" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.210701 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.243706 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:17:21 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 22 14:17:21 crc kubenswrapper[5110]: [+]process-running ok Jan 22 14:17:21 crc kubenswrapper[5110]: healthz check failed Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.243799 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.263728 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24ffcf63-716b-448f-8ee8-bd42f0a9c192-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"24ffcf63-716b-448f-8ee8-bd42f0a9c192\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.263869 
5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24ffcf63-716b-448f-8ee8-bd42f0a9c192-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"24ffcf63-716b-448f-8ee8-bd42f0a9c192\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.365330 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24ffcf63-716b-448f-8ee8-bd42f0a9c192-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"24ffcf63-716b-448f-8ee8-bd42f0a9c192\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.365464 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24ffcf63-716b-448f-8ee8-bd42f0a9c192-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"24ffcf63-716b-448f-8ee8-bd42f0a9c192\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.365549 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24ffcf63-716b-448f-8ee8-bd42f0a9c192-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"24ffcf63-716b-448f-8ee8-bd42f0a9c192\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.384886 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24ffcf63-716b-448f-8ee8-bd42f0a9c192-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"24ffcf63-716b-448f-8ee8-bd42f0a9c192\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.489028 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:17:21 crc kubenswrapper[5110]: I0122 14:17:21.689756 5110 ???:1] "http: TLS handshake error from 192.168.126.11:57938: no serving certificate available for the kubelet" Jan 22 14:17:22 crc kubenswrapper[5110]: I0122 14:17:22.234415 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:17:22 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 22 14:17:22 crc kubenswrapper[5110]: [+]process-running ok Jan 22 14:17:22 crc kubenswrapper[5110]: healthz check failed Jan 22 14:17:22 crc kubenswrapper[5110]: I0122 14:17:22.234494 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:17:22 crc kubenswrapper[5110]: I0122 14:17:22.282874 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Jan 22 14:17:23 crc kubenswrapper[5110]: I0122 14:17:23.234954 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:17:23 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 22 14:17:23 crc kubenswrapper[5110]: [+]process-running ok Jan 22 14:17:23 crc kubenswrapper[5110]: healthz check failed Jan 22 14:17:23 crc kubenswrapper[5110]: I0122 14:17:23.235368 5110 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:17:23 crc kubenswrapper[5110]: I0122 14:17:23.860382 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-swx4b" Jan 22 14:17:24 crc kubenswrapper[5110]: I0122 14:17:24.235181 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:17:24 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 22 14:17:24 crc kubenswrapper[5110]: [+]process-running ok Jan 22 14:17:24 crc kubenswrapper[5110]: healthz check failed Jan 22 14:17:24 crc kubenswrapper[5110]: I0122 14:17:24.235263 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 14:17:24 crc kubenswrapper[5110]: I0122 14:17:24.899089 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" Jan 22 14:17:25 crc kubenswrapper[5110]: I0122 14:17:25.233537 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 14:17:25 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld Jan 22 14:17:25 crc kubenswrapper[5110]: [+]process-running ok Jan 22 14:17:25 crc kubenswrapper[5110]: healthz check failed Jan 22 14:17:25 crc kubenswrapper[5110]: I0122 14:17:25.233667 
5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 14:17:26 crc kubenswrapper[5110]: I0122 14:17:26.206307 5110 patch_prober.go:28] interesting pod/console-64d44f6ddf-pk2pv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body=
Jan 22 14:17:26 crc kubenswrapper[5110]: I0122 14:17:26.207734 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-pk2pv" podUID="5c99795b-25a0-4c75-87ba-3c72c10f621d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused"
Jan 22 14:17:26 crc kubenswrapper[5110]: I0122 14:17:26.235169 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 14:17:26 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld
Jan 22 14:17:26 crc kubenswrapper[5110]: [+]process-running ok
Jan 22 14:17:26 crc kubenswrapper[5110]: healthz check failed
Jan 22 14:17:26 crc kubenswrapper[5110]: I0122 14:17:26.235501 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 14:17:27 crc kubenswrapper[5110]: I0122 14:17:27.235729 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 14:17:27 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld
Jan 22 14:17:27 crc kubenswrapper[5110]: [+]process-running ok
Jan 22 14:17:27 crc kubenswrapper[5110]: healthz check failed
Jan 22 14:17:27 crc kubenswrapper[5110]: I0122 14:17:27.236524 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 14:17:28 crc kubenswrapper[5110]: I0122 14:17:28.048769 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 14:17:28 crc kubenswrapper[5110]: I0122 14:17:28.237217 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 14:17:28 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld
Jan 22 14:17:28 crc kubenswrapper[5110]: [+]process-running ok
Jan 22 14:17:28 crc kubenswrapper[5110]: healthz check failed
Jan 22 14:17:28 crc kubenswrapper[5110]: I0122 14:17:28.237296 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 14:17:28 crc kubenswrapper[5110]: I0122 14:17:28.884072 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ms6jk"
Jan 22 14:17:29 crc kubenswrapper[5110]: I0122 14:17:29.235333 5110 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-qkzl4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 14:17:29 crc kubenswrapper[5110]: [-]has-synced failed: reason withheld
Jan 22 14:17:29 crc kubenswrapper[5110]: [+]process-running ok
Jan 22 14:17:29 crc kubenswrapper[5110]: healthz check failed
Jan 22 14:17:29 crc kubenswrapper[5110]: I0122 14:17:29.235432 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4" podUID="abc0de56-1146-46a6-8b5b-68373a09ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 14:17:30 crc kubenswrapper[5110]: I0122 14:17:30.234725 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4"
Jan 22 14:17:30 crc kubenswrapper[5110]: I0122 14:17:30.237700 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-qkzl4"
Jan 22 14:17:31 crc kubenswrapper[5110]: E0122 14:17:31.131011 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 22 14:17:31 crc kubenswrapper[5110]: E0122 14:17:31.133690 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 22 14:17:31 crc kubenswrapper[5110]: E0122 14:17:31.138554 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 22 14:17:31 crc kubenswrapper[5110]: E0122 14:17:31.138632 5110 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz" podUID="04a3cce4-5dc1-418d-a112-6d9e30fdbc52" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 22 14:17:31 crc kubenswrapper[5110]: I0122 14:17:31.967454 5110 ???:1] "http: TLS handshake error from 192.168.126.11:48450: no serving certificate available for the kubelet"
Jan 22 14:17:36 crc kubenswrapper[5110]: I0122 14:17:36.213074 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-pk2pv"
Jan 22 14:17:36 crc kubenswrapper[5110]: I0122 14:17:36.223100 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-pk2pv"
Jan 22 14:17:41 crc kubenswrapper[5110]: E0122 14:17:41.131374 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 22 14:17:41 crc kubenswrapper[5110]: E0122 14:17:41.133305 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 22 14:17:41 crc kubenswrapper[5110]: E0122 14:17:41.134859 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 22 14:17:41 crc kubenswrapper[5110]: E0122 14:17:41.134913 5110 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz" podUID="04a3cce4-5dc1-418d-a112-6d9e30fdbc52" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 22 14:17:42 crc kubenswrapper[5110]: I0122 14:17:42.569751 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Jan 22 14:17:43 crc kubenswrapper[5110]: I0122 14:17:43.849549 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2m627"
Jan 22 14:17:45 crc kubenswrapper[5110]: I0122 14:17:45.238339 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-rjmdz_04a3cce4-5dc1-418d-a112-6d9e30fdbc52/kube-multus-additional-cni-plugins/0.log"
Jan 22 14:17:45 crc kubenswrapper[5110]: I0122 14:17:45.238580 5110 generic.go:358] "Generic (PLEG): container finished" podID="04a3cce4-5dc1-418d-a112-6d9e30fdbc52" containerID="1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f" exitCode=137
Jan 22 14:17:45 crc kubenswrapper[5110]: I0122 14:17:45.238599 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz" event={"ID":"04a3cce4-5dc1-418d-a112-6d9e30fdbc52","Type":"ContainerDied","Data":"1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f"}
Jan 22 14:17:51 crc kubenswrapper[5110]: E0122 14:17:51.129936 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f is running failed: container process not found" containerID="1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 22 14:17:51 crc kubenswrapper[5110]: E0122 14:17:51.130879 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f is running failed: container process not found" containerID="1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 22 14:17:51 crc kubenswrapper[5110]: E0122 14:17:51.131219 5110 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f is running failed: container process not found" containerID="1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 22 14:17:51 crc kubenswrapper[5110]: E0122 14:17:51.131258 5110 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz" podUID="04a3cce4-5dc1-418d-a112-6d9e30fdbc52" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 22 14:17:52 crc kubenswrapper[5110]: I0122 14:17:52.475168 5110 ???:1] "http: TLS handshake error from 192.168.126.11:39828: no serving certificate available for the kubelet"
Jan 22 14:17:54 crc kubenswrapper[5110]: I0122 14:17:54.207259 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 22 14:17:54 crc kubenswrapper[5110]: I0122 14:17:54.602763 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 22 14:17:54 crc kubenswrapper[5110]: I0122 14:17:54.617875 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 22 14:17:54 crc kubenswrapper[5110]: W0122 14:17:54.640000 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod24ffcf63_716b_448f_8ee8_bd42f0a9c192.slice/crio-780d76123dfa173d0a539653531d4c9e9ee305a90ca5d43c872cfd347bf63a54 WatchSource:0}: Error finding container 780d76123dfa173d0a539653531d4c9e9ee305a90ca5d43c872cfd347bf63a54: Status 404 returned error can't find the container with id 780d76123dfa173d0a539653531d4c9e9ee305a90ca5d43c872cfd347bf63a54
Jan 22 14:17:54 crc kubenswrapper[5110]: I0122 14:17:54.702143 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1f029de-5a24-425c-b850-ddef89e6a5f4-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"d1f029de-5a24-425c-b850-ddef89e6a5f4\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 22 14:17:54 crc kubenswrapper[5110]: I0122 14:17:54.702414 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1f029de-5a24-425c-b850-ddef89e6a5f4-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"d1f029de-5a24-425c-b850-ddef89e6a5f4\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 22 14:17:54 crc kubenswrapper[5110]: I0122 14:17:54.803766 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1f029de-5a24-425c-b850-ddef89e6a5f4-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"d1f029de-5a24-425c-b850-ddef89e6a5f4\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 22 14:17:54 crc kubenswrapper[5110]: I0122 14:17:54.803950 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1f029de-5a24-425c-b850-ddef89e6a5f4-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"d1f029de-5a24-425c-b850-ddef89e6a5f4\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 22 14:17:54 crc kubenswrapper[5110]: I0122 14:17:54.803960 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1f029de-5a24-425c-b850-ddef89e6a5f4-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"d1f029de-5a24-425c-b850-ddef89e6a5f4\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 22 14:17:54 crc kubenswrapper[5110]: I0122 14:17:54.844205 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1f029de-5a24-425c-b850-ddef89e6a5f4-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"d1f029de-5a24-425c-b850-ddef89e6a5f4\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 22 14:17:54 crc kubenswrapper[5110]: I0122 14:17:54.984213 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 22 14:17:55 crc kubenswrapper[5110]: I0122 14:17:55.296002 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"24ffcf63-716b-448f-8ee8-bd42f0a9c192","Type":"ContainerStarted","Data":"780d76123dfa173d0a539653531d4c9e9ee305a90ca5d43c872cfd347bf63a54"}
Jan 22 14:17:55 crc kubenswrapper[5110]: I0122 14:17:55.374613 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-rjmdz_04a3cce4-5dc1-418d-a112-6d9e30fdbc52/kube-multus-additional-cni-plugins/0.log"
Jan 22 14:17:55 crc kubenswrapper[5110]: I0122 14:17:55.374956 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz"
Jan 22 14:17:55 crc kubenswrapper[5110]: I0122 14:17:55.414824 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-ready\") pod \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\" (UID: \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\") "
Jan 22 14:17:55 crc kubenswrapper[5110]: I0122 14:17:55.414888 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvjr8\" (UniqueName: \"kubernetes.io/projected/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-kube-api-access-tvjr8\") pod \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\" (UID: \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\") "
Jan 22 14:17:55 crc kubenswrapper[5110]: I0122 14:17:55.414937 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-cni-sysctl-allowlist\") pod \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\" (UID: \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\") "
Jan 22 14:17:55 crc kubenswrapper[5110]: I0122 14:17:55.415013 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-tuning-conf-dir\") pod \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\" (UID: \"04a3cce4-5dc1-418d-a112-6d9e30fdbc52\") "
Jan 22 14:17:55 crc kubenswrapper[5110]: I0122 14:17:55.415276 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "04a3cce4-5dc1-418d-a112-6d9e30fdbc52" (UID: "04a3cce4-5dc1-418d-a112-6d9e30fdbc52"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 22 14:17:55 crc kubenswrapper[5110]: I0122 14:17:55.415788 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-ready" (OuterVolumeSpecName: "ready") pod "04a3cce4-5dc1-418d-a112-6d9e30fdbc52" (UID: "04a3cce4-5dc1-418d-a112-6d9e30fdbc52"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:17:55 crc kubenswrapper[5110]: I0122 14:17:55.416044 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "04a3cce4-5dc1-418d-a112-6d9e30fdbc52" (UID: "04a3cce4-5dc1-418d-a112-6d9e30fdbc52"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:17:55 crc kubenswrapper[5110]: I0122 14:17:55.422119 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-kube-api-access-tvjr8" (OuterVolumeSpecName: "kube-api-access-tvjr8") pod "04a3cce4-5dc1-418d-a112-6d9e30fdbc52" (UID: "04a3cce4-5dc1-418d-a112-6d9e30fdbc52"). InnerVolumeSpecName "kube-api-access-tvjr8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:17:55 crc kubenswrapper[5110]: I0122 14:17:55.516606 5110 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 22 14:17:55 crc kubenswrapper[5110]: I0122 14:17:55.516831 5110 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Jan 22 14:17:55 crc kubenswrapper[5110]: I0122 14:17:55.516934 5110 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-ready\") on node \"crc\" DevicePath \"\""
Jan 22 14:17:55 crc kubenswrapper[5110]: I0122 14:17:55.517011 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tvjr8\" (UniqueName: \"kubernetes.io/projected/04a3cce4-5dc1-418d-a112-6d9e30fdbc52-kube-api-access-tvjr8\") on node \"crc\" DevicePath \"\""
Jan 22 14:17:55 crc kubenswrapper[5110]: I0122 14:17:55.776379 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-js5pl"]
Jan 22 14:17:56 crc kubenswrapper[5110]: W0122 14:17:56.043682 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod455fa20f_c1d4_4086_8874_9526d4c4d24d.slice/crio-17682abaa8ca0672834ab824b8603fced344c73ddbff5defca76a4ec3328ef66 WatchSource:0}: Error finding container 17682abaa8ca0672834ab824b8603fced344c73ddbff5defca76a4ec3328ef66: Status 404 returned error can't find the container with id 17682abaa8ca0672834ab824b8603fced344c73ddbff5defca76a4ec3328ef66
Jan 22 14:17:56 crc kubenswrapper[5110]: I0122 14:17:56.303514 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-rjmdz_04a3cce4-5dc1-418d-a112-6d9e30fdbc52/kube-multus-additional-cni-plugins/0.log"
Jan 22 14:17:56 crc kubenswrapper[5110]: I0122 14:17:56.304020 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz"
Jan 22 14:17:56 crc kubenswrapper[5110]: I0122 14:17:56.304025 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rjmdz" event={"ID":"04a3cce4-5dc1-418d-a112-6d9e30fdbc52","Type":"ContainerDied","Data":"ed142409ae86cd70bec622d0e0f495b9dd898b0a9121d48fc5ad827244c35b99"}
Jan 22 14:17:56 crc kubenswrapper[5110]: I0122 14:17:56.304127 5110 scope.go:117] "RemoveContainer" containerID="1d2f7573798a25febcda7b10215422ca0d0188a49a8d798c8d08acf26d34178f"
Jan 22 14:17:56 crc kubenswrapper[5110]: I0122 14:17:56.306438 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"56dc0db66cb238b72f63c47e4c83f46a4dfb3c63ae8b48969f33e65177baf223"}
Jan 22 14:17:56 crc kubenswrapper[5110]: I0122 14:17:56.308425 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-js5pl" event={"ID":"455fa20f-c1d4-4086-8874-9526d4c4d24d","Type":"ContainerStarted","Data":"17682abaa8ca0672834ab824b8603fced344c73ddbff5defca76a4ec3328ef66"}
Jan 22 14:17:56 crc kubenswrapper[5110]: I0122 14:17:56.311435 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"de0f6ec5cd6f4676dccf2767e853f16d6dfc723d35342c9bcc27ace7bec91165"}
Jan 22 14:17:56 crc kubenswrapper[5110]: I0122 14:17:56.325782 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rjmdz"]
Jan 22 14:17:56 crc kubenswrapper[5110]: I0122 14:17:56.328555 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rjmdz"]
Jan 22 14:17:56 crc kubenswrapper[5110]: I0122 14:17:56.433602 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-x4jp5"]
Jan 22 14:17:56 crc kubenswrapper[5110]: W0122 14:17:56.514661 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b87002_b798_480a_8e17_83053d698239.slice/crio-02b8de3f8b64af9fd02811c9c3fd8a70b015f6541b4d814cffc198f9d9ee4273 WatchSource:0}: Error finding container 02b8de3f8b64af9fd02811c9c3fd8a70b015f6541b4d814cffc198f9d9ee4273: Status 404 returned error can't find the container with id 02b8de3f8b64af9fd02811c9c3fd8a70b015f6541b4d814cffc198f9d9ee4273
Jan 22 14:17:56 crc kubenswrapper[5110]: W0122 14:17:56.517084 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod595a4ab3_66a1_41ae_93e2_9476c1b14270.slice/crio-2adf73532ed4b98f61985749fce330acd0907657d846235607a7f93d6c6ef873 WatchSource:0}: Error finding container 2adf73532ed4b98f61985749fce330acd0907657d846235607a7f93d6c6ef873: Status 404 returned error can't find the container with id 2adf73532ed4b98f61985749fce330acd0907657d846235607a7f93d6c6ef873
Jan 22 14:17:56 crc kubenswrapper[5110]: I0122 14:17:56.920559 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 22 14:17:56 crc kubenswrapper[5110]: W0122 14:17:56.936035 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd1f029de_5a24_425c_b850_ddef89e6a5f4.slice/crio-9a816af61795da879489702b306f62462260ac8926a288178bc5aea068f8897c WatchSource:0}: Error finding container 9a816af61795da879489702b306f62462260ac8926a288178bc5aea068f8897c: Status 404 returned error can't find the container with id 9a816af61795da879489702b306f62462260ac8926a288178bc5aea068f8897c
Jan 22 14:17:57 crc kubenswrapper[5110]: I0122 14:17:57.318036 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"d1f029de-5a24-425c-b850-ddef89e6a5f4","Type":"ContainerStarted","Data":"9a816af61795da879489702b306f62462260ac8926a288178bc5aea068f8897c"}
Jan 22 14:17:57 crc kubenswrapper[5110]: I0122 14:17:57.323776 5110 generic.go:358] "Generic (PLEG): container finished" podID="4baaed1f-91ae-4249-96ed-11c5a93986ab" containerID="2596cbe3f2f152915b36a9547f07fcb39da8888bb74c516bc68be36c48af2ba6" exitCode=0
Jan 22 14:17:57 crc kubenswrapper[5110]: I0122 14:17:57.323850 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rg44w" event={"ID":"4baaed1f-91ae-4249-96ed-11c5a93986ab","Type":"ContainerDied","Data":"2596cbe3f2f152915b36a9547f07fcb39da8888bb74c516bc68be36c48af2ba6"}
Jan 22 14:17:57 crc kubenswrapper[5110]: I0122 14:17:57.326356 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8jstw" event={"ID":"a1a05965-c2b8-41e0-89d2-973217252f27","Type":"ContainerStarted","Data":"7dc555bdd29c63f129d422763d9f58d5a62a691dd0f4952976d80dd0fdd0acbd"}
Jan 22 14:17:57 crc kubenswrapper[5110]: I0122 14:17:57.328695 5110 generic.go:358] "Generic (PLEG): container finished" podID="ca4e8bf1-1091-4c8c-b724-9af50f626c94" containerID="ed22d452b1354b674a09fc055330c28b8b1cc9b4b72915c5663399c5b8b5bd2b" exitCode=0
Jan 22 14:17:57 crc kubenswrapper[5110]: I0122 14:17:57.328748 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v85l6" event={"ID":"ca4e8bf1-1091-4c8c-b724-9af50f626c94","Type":"ContainerDied","Data":"ed22d452b1354b674a09fc055330c28b8b1cc9b4b72915c5663399c5b8b5bd2b"}
Jan 22 14:17:57 crc kubenswrapper[5110]: I0122 14:17:57.332081 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"24ffcf63-716b-448f-8ee8-bd42f0a9c192","Type":"ContainerStarted","Data":"779ec8b3d416b84342d64ed9ce7f653a414b0596e02cf6168d145373553ee858"}
Jan 22 14:17:57 crc kubenswrapper[5110]: I0122 14:17:57.333989 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"02b8de3f8b64af9fd02811c9c3fd8a70b015f6541b4d814cffc198f9d9ee4273"}
Jan 22 14:17:57 crc kubenswrapper[5110]: I0122 14:17:57.335294 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" event={"ID":"595a4ab3-66a1-41ae-93e2-9476c1b14270","Type":"ContainerStarted","Data":"2adf73532ed4b98f61985749fce330acd0907657d846235607a7f93d6c6ef873"}
Jan 22 14:17:57 crc kubenswrapper[5110]: I0122 14:17:57.337177 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"bba9ebe40d6d90f4ba463757402fe32d4296e15cc654d294ef881bb625f8fd12"}
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.280571 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04a3cce4-5dc1-418d-a112-6d9e30fdbc52" path="/var/lib/kubelet/pods/04a3cce4-5dc1-418d-a112-6d9e30fdbc52/volumes"
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.351236 5110 generic.go:358] "Generic (PLEG): container finished" podID="e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7" containerID="1b3a9651196fb50cdf2ee2d0efbd419ceef76ae5900cfd11e9c9a68366068950" exitCode=0
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.351338 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6z98k" event={"ID":"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7","Type":"ContainerDied","Data":"1b3a9651196fb50cdf2ee2d0efbd419ceef76ae5900cfd11e9c9a68366068950"}
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.354403 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khw24" event={"ID":"d7eb5f07-02fb-4d06-95f6-e0d5652bae89","Type":"ContainerStarted","Data":"c8f662670a503d1e755406c28d86377a7fe9dce90431d238f57bd35d2c4d5345"}
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.356588 5110 generic.go:358] "Generic (PLEG): container finished" podID="9f6cc7b0-6a05-44c4-baba-0decaa2ad061" containerID="30cb93666c4e2ff4bc921056f9bcb90b995bc2c2111320c3bd70dc6ad991016f" exitCode=0
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.356657 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trs4m" event={"ID":"9f6cc7b0-6a05-44c4-baba-0decaa2ad061","Type":"ContainerDied","Data":"30cb93666c4e2ff4bc921056f9bcb90b995bc2c2111320c3bd70dc6ad991016f"}
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.364780 5110 generic.go:358] "Generic (PLEG): container finished" podID="0c5d2009-8141-4816-b0d1-350eaee192ef" containerID="1e3f8b7ee1cc818bc9b40e013850ba1a6381589dec276d1c9ce553a43399eb17" exitCode=0
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.364895 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk6fr" event={"ID":"0c5d2009-8141-4816-b0d1-350eaee192ef","Type":"ContainerDied","Data":"1e3f8b7ee1cc818bc9b40e013850ba1a6381589dec276d1c9ce553a43399eb17"}
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.367945 5110 generic.go:358] "Generic (PLEG): container finished" podID="39b263fe-d08c-46d3-ba73-30ad3e8deec1" containerID="a7044646b488586b273e861c198848ec5cba3559e937f0ae37945da970f05c1a" exitCode=0
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.367996 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwt5t" event={"ID":"39b263fe-d08c-46d3-ba73-30ad3e8deec1","Type":"ContainerDied","Data":"a7044646b488586b273e861c198848ec5cba3559e937f0ae37945da970f05c1a"}
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.375197 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=38.375178209 podStartE2EDuration="38.375178209s" podCreationTimestamp="2026-01-22 14:17:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:57.416703699 +0000 UTC m=+157.638788068" watchObservedRunningTime="2026-01-22 14:17:58.375178209 +0000 UTC m=+158.597262578"
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.375558 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"1e2e6b7bb106b8742d465c91ab8162e0a5bb53f05798dff82e8304ad81b2a6d2"}
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.377652 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-js5pl" event={"ID":"455fa20f-c1d4-4086-8874-9526d4c4d24d","Type":"ContainerStarted","Data":"756afccb9a35b913e75d6c363bc8d2b7dadaeddd4411c4296c7710704648ec37"}
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.381188 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8jstw" event={"ID":"a1a05965-c2b8-41e0-89d2-973217252f27","Type":"ContainerStarted","Data":"612d20b14262afbf5b6b9f4a9c561f229de7686f5345dbfab868df4e8088d32c"}
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.383307 5110 generic.go:358] "Generic (PLEG): container finished" podID="24ffcf63-716b-448f-8ee8-bd42f0a9c192" containerID="779ec8b3d416b84342d64ed9ce7f653a414b0596e02cf6168d145373553ee858" exitCode=0
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.383417 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"24ffcf63-716b-448f-8ee8-bd42f0a9c192","Type":"ContainerDied","Data":"779ec8b3d416b84342d64ed9ce7f653a414b0596e02cf6168d145373553ee858"}
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.385308 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"d901e95163f6372494658a5be2da408a6d04cbe2865085e9a0d777c226b36b67"}
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.388867 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" event={"ID":"595a4ab3-66a1-41ae-93e2-9476c1b14270","Type":"ContainerStarted","Data":"febfc102e4f0d153ee6aa7f7ca86db3f96a6b5b21de6b520b2fb4197a0e3f24f"}
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.408145 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 14:17:58 crc kubenswrapper[5110]: I0122 14:17:58.947142 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.400670 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6z98k" event={"ID":"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7","Type":"ContainerStarted","Data":"919e3d2a8af0cde2a617c7092598cb61e1cec71cdb1e3a7a3a1d2cf9a4eef276"}
Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.402334 5110 generic.go:358] "Generic (PLEG): container finished" podID="d7eb5f07-02fb-4d06-95f6-e0d5652bae89" containerID="c8f662670a503d1e755406c28d86377a7fe9dce90431d238f57bd35d2c4d5345" exitCode=0
Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.402442 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khw24" event={"ID":"d7eb5f07-02fb-4d06-95f6-e0d5652bae89","Type":"ContainerDied","Data":"c8f662670a503d1e755406c28d86377a7fe9dce90431d238f57bd35d2c4d5345"}
Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.405552 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trs4m" event={"ID":"9f6cc7b0-6a05-44c4-baba-0decaa2ad061","Type":"ContainerStarted","Data":"1b68d5df452b6e05727d786eee1a756eb5bbe4263065c029d284c24092b65fca"}
Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.407147 5110 generic.go:358] "Generic (PLEG): container finished" podID="d1f029de-5a24-425c-b850-ddef89e6a5f4" containerID="925413f6b27c6d9638bc20bca1319fa7fc8e642d4bf172b0f3e6242491324906" exitCode=0
Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.407222 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"d1f029de-5a24-425c-b850-ddef89e6a5f4","Type":"ContainerDied","Data":"925413f6b27c6d9638bc20bca1319fa7fc8e642d4bf172b0f3e6242491324906"}
Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.409597 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwt5t" event={"ID":"39b263fe-d08c-46d3-ba73-30ad3e8deec1","Type":"ContainerStarted","Data":"938e023755b1d0b6046a3ce5116f8dfcb783679ba287bf10d9661e111e0a2927"}
Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.411054 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-js5pl" event={"ID":"455fa20f-c1d4-4086-8874-9526d4c4d24d","Type":"ContainerStarted","Data":"ad23b9fe53ee74e9e37d09d48202b3818008b45b87a5c198cbc0e4a543b6f0f7"}
Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.415122 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rg44w" event={"ID":"4baaed1f-91ae-4249-96ed-11c5a93986ab","Type":"ContainerStarted","Data":"2e9f16ffea73a23b43ec1c766556ae6aa458687b5c6e21336f4c38901b3d0d34"}
Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.417350 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v85l6" event={"ID":"ca4e8bf1-1091-4c8c-b724-9af50f626c94","Type":"ContainerStarted","Data":"d026a64aad25e9c870ec43fbfc74d2be6fb7317440a7a30dfb307ca0587ff860"}
Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.418798 5110 generic.go:358] "Generic (PLEG): container finished" podID="d099c037-9022-46df-8a66-e2856ee9dbd9" containerID="2a1b4f216ea26c5181aedf4d36e52d9701d7b6eb69a3850e09217bf1ac8f695d" exitCode=0
Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.418823 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxzv2" event={"ID":"d099c037-9022-46df-8a66-e2856ee9dbd9","Type":"ContainerDied","Data":"2a1b4f216ea26c5181aedf4d36e52d9701d7b6eb69a3850e09217bf1ac8f695d"}
Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.423263 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6z98k" podStartSLOduration=5.67492795 podStartE2EDuration="48.423225566s" podCreationTimestamp="2026-01-22 14:17:11 +0000 UTC" firstStartedPulling="2026-01-22 14:17:13.769983697 +0000 UTC m=+113.992068056" lastFinishedPulling="2026-01-22 14:17:56.518281283 +0000 UTC m=+156.740365672" observedRunningTime="2026-01-22 14:17:59.419613751 +0000 UTC m=+159.641698120" watchObservedRunningTime="2026-01-22 14:17:59.423225566 +0000 UTC m=+159.645309925"
Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.425272 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" podStartSLOduration=139.42526302 podStartE2EDuration="2m19.42526302s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:58.973036767 +0000 UTC m=+159.195121126" watchObservedRunningTime="2026-01-22 14:17:59.42526302 +0000 UTC m=+159.647347379" Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.470918 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wwt5t" podStartSLOduration=4.97319387 podStartE2EDuration="44.470902815s" podCreationTimestamp="2026-01-22 14:17:15 +0000 UTC" firstStartedPulling="2026-01-22 14:17:17.021223265 +0000 UTC m=+117.243307614" lastFinishedPulling="2026-01-22 14:17:56.5189322 +0000 UTC m=+156.741016559" observedRunningTime="2026-01-22 14:17:59.468876322 +0000 UTC m=+159.690960701" watchObservedRunningTime="2026-01-22 14:17:59.470902815 +0000 UTC m=+159.692987174" Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.496932 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-8jstw" podStartSLOduration=56.496019778 podStartE2EDuration="56.496019778s" podCreationTimestamp="2026-01-22 14:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:59.495997968 +0000 UTC m=+159.718082337" watchObservedRunningTime="2026-01-22 14:17:59.496019778 +0000 UTC m=+159.718104137" Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.522865 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rg44w" podStartSLOduration=6.994055314 podStartE2EDuration="47.522845337s" podCreationTimestamp="2026-01-22 14:17:12 +0000 UTC" firstStartedPulling="2026-01-22 
14:17:14.855328669 +0000 UTC m=+115.077413028" lastFinishedPulling="2026-01-22 14:17:55.384118692 +0000 UTC m=+155.606203051" observedRunningTime="2026-01-22 14:17:59.520647749 +0000 UTC m=+159.742732118" watchObservedRunningTime="2026-01-22 14:17:59.522845337 +0000 UTC m=+159.744929696" Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.545302 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-js5pl" podStartSLOduration=139.545277189 podStartE2EDuration="2m19.545277189s" podCreationTimestamp="2026-01-22 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:17:59.540278307 +0000 UTC m=+159.762362676" watchObservedRunningTime="2026-01-22 14:17:59.545277189 +0000 UTC m=+159.767361548" Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.572266 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-trs4m" podStartSLOduration=6.167004257 podStartE2EDuration="45.572249801s" podCreationTimestamp="2026-01-22 14:17:14 +0000 UTC" firstStartedPulling="2026-01-22 14:17:15.921477353 +0000 UTC m=+116.143561702" lastFinishedPulling="2026-01-22 14:17:55.326722887 +0000 UTC m=+155.548807246" observedRunningTime="2026-01-22 14:17:59.570953307 +0000 UTC m=+159.793037676" watchObservedRunningTime="2026-01-22 14:17:59.572249801 +0000 UTC m=+159.794334160" Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.676692 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v85l6" podStartSLOduration=7.156073652 podStartE2EDuration="47.676679129s" podCreationTimestamp="2026-01-22 14:17:12 +0000 UTC" firstStartedPulling="2026-01-22 14:17:14.863537006 +0000 UTC m=+115.085621365" lastFinishedPulling="2026-01-22 14:17:55.384142483 +0000 UTC m=+155.606226842" 
observedRunningTime="2026-01-22 14:17:59.675672633 +0000 UTC m=+159.897757022" watchObservedRunningTime="2026-01-22 14:17:59.676679129 +0000 UTC m=+159.898763488" Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.752179 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.874248 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24ffcf63-716b-448f-8ee8-bd42f0a9c192-kube-api-access\") pod \"24ffcf63-716b-448f-8ee8-bd42f0a9c192\" (UID: \"24ffcf63-716b-448f-8ee8-bd42f0a9c192\") " Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.874421 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24ffcf63-716b-448f-8ee8-bd42f0a9c192-kubelet-dir\") pod \"24ffcf63-716b-448f-8ee8-bd42f0a9c192\" (UID: \"24ffcf63-716b-448f-8ee8-bd42f0a9c192\") " Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.874510 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24ffcf63-716b-448f-8ee8-bd42f0a9c192-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "24ffcf63-716b-448f-8ee8-bd42f0a9c192" (UID: "24ffcf63-716b-448f-8ee8-bd42f0a9c192"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.874744 5110 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24ffcf63-716b-448f-8ee8-bd42f0a9c192-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.881971 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ffcf63-716b-448f-8ee8-bd42f0a9c192-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "24ffcf63-716b-448f-8ee8-bd42f0a9c192" (UID: "24ffcf63-716b-448f-8ee8-bd42f0a9c192"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:17:59 crc kubenswrapper[5110]: I0122 14:17:59.976159 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24ffcf63-716b-448f-8ee8-bd42f0a9c192-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:00 crc kubenswrapper[5110]: I0122 14:18:00.423859 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"24ffcf63-716b-448f-8ee8-bd42f0a9c192","Type":"ContainerDied","Data":"780d76123dfa173d0a539653531d4c9e9ee305a90ca5d43c872cfd347bf63a54"} Jan 22 14:18:00 crc kubenswrapper[5110]: I0122 14:18:00.423898 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="780d76123dfa173d0a539653531d4c9e9ee305a90ca5d43c872cfd347bf63a54" Jan 22 14:18:00 crc kubenswrapper[5110]: I0122 14:18:00.423969 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 14:18:00 crc kubenswrapper[5110]: I0122 14:18:00.427138 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxzv2" event={"ID":"d099c037-9022-46df-8a66-e2856ee9dbd9","Type":"ContainerStarted","Data":"2a3585376f58dfda80a1c499c20d35ff254bcc810248cf3f4818892320a95885"} Jan 22 14:18:00 crc kubenswrapper[5110]: I0122 14:18:00.429746 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khw24" event={"ID":"d7eb5f07-02fb-4d06-95f6-e0d5652bae89","Type":"ContainerStarted","Data":"9aae7ef004e6b61b8443acda9726d680a0f0e5754cd102ac6c332365055cd258"} Jan 22 14:18:00 crc kubenswrapper[5110]: I0122 14:18:00.431939 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk6fr" event={"ID":"0c5d2009-8141-4816-b0d1-350eaee192ef","Type":"ContainerStarted","Data":"58c73875a2558fceeeca0cfff3502270af75292ff4fdb8d459bf5052873448e8"} Jan 22 14:18:00 crc kubenswrapper[5110]: I0122 14:18:00.489575 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hxzv2" podStartSLOduration=7.8512115609999995 podStartE2EDuration="49.489555526s" podCreationTimestamp="2026-01-22 14:17:11 +0000 UTC" firstStartedPulling="2026-01-22 14:17:14.879799965 +0000 UTC m=+115.101884324" lastFinishedPulling="2026-01-22 14:17:56.51814393 +0000 UTC m=+156.740228289" observedRunningTime="2026-01-22 14:18:00.462046019 +0000 UTC m=+160.684130388" watchObservedRunningTime="2026-01-22 14:18:00.489555526 +0000 UTC m=+160.711639875" Jan 22 14:18:00 crc kubenswrapper[5110]: I0122 14:18:00.491511 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xk6fr" podStartSLOduration=6.927055431 podStartE2EDuration="47.491501037s" podCreationTimestamp="2026-01-22 14:17:13 +0000 UTC" 
firstStartedPulling="2026-01-22 14:17:15.954932996 +0000 UTC m=+116.177017355" lastFinishedPulling="2026-01-22 14:17:56.519378572 +0000 UTC m=+156.741462961" observedRunningTime="2026-01-22 14:18:00.487867961 +0000 UTC m=+160.709952350" watchObservedRunningTime="2026-01-22 14:18:00.491501037 +0000 UTC m=+160.713585396" Jan 22 14:18:00 crc kubenswrapper[5110]: I0122 14:18:00.517352 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-khw24" podStartSLOduration=7.030669393 podStartE2EDuration="45.517339189s" podCreationTimestamp="2026-01-22 14:17:15 +0000 UTC" firstStartedPulling="2026-01-22 14:17:18.045238837 +0000 UTC m=+118.267323196" lastFinishedPulling="2026-01-22 14:17:56.531908603 +0000 UTC m=+156.753992992" observedRunningTime="2026-01-22 14:18:00.513340564 +0000 UTC m=+160.735424923" watchObservedRunningTime="2026-01-22 14:18:00.517339189 +0000 UTC m=+160.739423538" Jan 22 14:18:00 crc kubenswrapper[5110]: I0122 14:18:00.720012 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 14:18:00 crc kubenswrapper[5110]: I0122 14:18:00.886544 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1f029de-5a24-425c-b850-ddef89e6a5f4-kubelet-dir\") pod \"d1f029de-5a24-425c-b850-ddef89e6a5f4\" (UID: \"d1f029de-5a24-425c-b850-ddef89e6a5f4\") " Jan 22 14:18:00 crc kubenswrapper[5110]: I0122 14:18:00.886604 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1f029de-5a24-425c-b850-ddef89e6a5f4-kube-api-access\") pod \"d1f029de-5a24-425c-b850-ddef89e6a5f4\" (UID: \"d1f029de-5a24-425c-b850-ddef89e6a5f4\") " Jan 22 14:18:00 crc kubenswrapper[5110]: I0122 14:18:00.886924 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1f029de-5a24-425c-b850-ddef89e6a5f4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d1f029de-5a24-425c-b850-ddef89e6a5f4" (UID: "d1f029de-5a24-425c-b850-ddef89e6a5f4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:18:00 crc kubenswrapper[5110]: I0122 14:18:00.892982 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f029de-5a24-425c-b850-ddef89e6a5f4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d1f029de-5a24-425c-b850-ddef89e6a5f4" (UID: "d1f029de-5a24-425c-b850-ddef89e6a5f4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:18:00 crc kubenswrapper[5110]: I0122 14:18:00.987741 5110 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1f029de-5a24-425c-b850-ddef89e6a5f4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:00 crc kubenswrapper[5110]: I0122 14:18:00.987781 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1f029de-5a24-425c-b850-ddef89e6a5f4-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:01 crc kubenswrapper[5110]: I0122 14:18:01.439404 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"d1f029de-5a24-425c-b850-ddef89e6a5f4","Type":"ContainerDied","Data":"9a816af61795da879489702b306f62462260ac8926a288178bc5aea068f8897c"} Jan 22 14:18:01 crc kubenswrapper[5110]: I0122 14:18:01.439769 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a816af61795da879489702b306f62462260ac8926a288178bc5aea068f8897c" Jan 22 14:18:01 crc kubenswrapper[5110]: I0122 14:18:01.439681 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.125916 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6z98k" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.125959 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-6z98k" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.379852 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6z98k" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.572864 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-hxzv2" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.572922 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hxzv2" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.599020 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.600031 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="04a3cce4-5dc1-418d-a112-6d9e30fdbc52" containerName="kube-multus-additional-cni-plugins" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.600065 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a3cce4-5dc1-418d-a112-6d9e30fdbc52" containerName="kube-multus-additional-cni-plugins" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.600082 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d1f029de-5a24-425c-b850-ddef89e6a5f4" containerName="pruner" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.600089 5110 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d1f029de-5a24-425c-b850-ddef89e6a5f4" containerName="pruner" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.600768 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="24ffcf63-716b-448f-8ee8-bd42f0a9c192" containerName="pruner" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.600813 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ffcf63-716b-448f-8ee8-bd42f0a9c192" containerName="pruner" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.600997 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="04a3cce4-5dc1-418d-a112-6d9e30fdbc52" containerName="kube-multus-additional-cni-plugins" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.601017 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="24ffcf63-716b-448f-8ee8-bd42f0a9c192" containerName="pruner" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.601026 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="d1f029de-5a24-425c-b850-ddef89e6a5f4" containerName="pruner" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.785529 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.785610 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-v85l6" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.785839 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hxzv2" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.785861 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-rg44w" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.785901 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-v85l6" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.786018 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v85l6" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.786390 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.786467 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rg44w" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.788783 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.789302 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.834570 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rg44w" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.910539 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7b0c5f7-6807-40b0-8295-0bac4129a62e-kube-api-access\") pod \"installer-12-crc\" (UID: \"c7b0c5f7-6807-40b0-8295-0bac4129a62e\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.910592 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7b0c5f7-6807-40b0-8295-0bac4129a62e-kubelet-dir\") pod \"installer-12-crc\" (UID: \"c7b0c5f7-6807-40b0-8295-0bac4129a62e\") " 
pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:18:02 crc kubenswrapper[5110]: I0122 14:18:02.910691 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c7b0c5f7-6807-40b0-8295-0bac4129a62e-var-lock\") pod \"installer-12-crc\" (UID: \"c7b0c5f7-6807-40b0-8295-0bac4129a62e\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:18:03 crc kubenswrapper[5110]: I0122 14:18:03.011754 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c7b0c5f7-6807-40b0-8295-0bac4129a62e-var-lock\") pod \"installer-12-crc\" (UID: \"c7b0c5f7-6807-40b0-8295-0bac4129a62e\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:18:03 crc kubenswrapper[5110]: I0122 14:18:03.011834 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7b0c5f7-6807-40b0-8295-0bac4129a62e-kube-api-access\") pod \"installer-12-crc\" (UID: \"c7b0c5f7-6807-40b0-8295-0bac4129a62e\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:18:03 crc kubenswrapper[5110]: I0122 14:18:03.011913 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c7b0c5f7-6807-40b0-8295-0bac4129a62e-var-lock\") pod \"installer-12-crc\" (UID: \"c7b0c5f7-6807-40b0-8295-0bac4129a62e\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:18:03 crc kubenswrapper[5110]: I0122 14:18:03.012014 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7b0c5f7-6807-40b0-8295-0bac4129a62e-kubelet-dir\") pod \"installer-12-crc\" (UID: \"c7b0c5f7-6807-40b0-8295-0bac4129a62e\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:18:03 crc kubenswrapper[5110]: I0122 14:18:03.012294 5110 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7b0c5f7-6807-40b0-8295-0bac4129a62e-kubelet-dir\") pod \"installer-12-crc\" (UID: \"c7b0c5f7-6807-40b0-8295-0bac4129a62e\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:18:03 crc kubenswrapper[5110]: I0122 14:18:03.030472 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7b0c5f7-6807-40b0-8295-0bac4129a62e-kube-api-access\") pod \"installer-12-crc\" (UID: \"c7b0c5f7-6807-40b0-8295-0bac4129a62e\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:18:03 crc kubenswrapper[5110]: I0122 14:18:03.105668 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:18:03 crc kubenswrapper[5110]: I0122 14:18:03.490560 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rg44w" Jan 22 14:18:03 crc kubenswrapper[5110]: I0122 14:18:03.492145 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v85l6" Jan 22 14:18:03 crc kubenswrapper[5110]: I0122 14:18:03.530025 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 22 14:18:03 crc kubenswrapper[5110]: W0122 14:18:03.530503 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc7b0c5f7_6807_40b0_8295_0bac4129a62e.slice/crio-3dac329006ea36a23c3563a0e3b7073d477ba3172f26516294360f811f0cb978 WatchSource:0}: Error finding container 3dac329006ea36a23c3563a0e3b7073d477ba3172f26516294360f811f0cb978: Status 404 returned error can't find the container with id 3dac329006ea36a23c3563a0e3b7073d477ba3172f26516294360f811f0cb978 Jan 22 14:18:04 crc kubenswrapper[5110]: I0122 14:18:04.171835 5110 kubelet.go:2658] 
"SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-xk6fr" Jan 22 14:18:04 crc kubenswrapper[5110]: I0122 14:18:04.172069 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xk6fr" Jan 22 14:18:04 crc kubenswrapper[5110]: I0122 14:18:04.228570 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xk6fr" Jan 22 14:18:04 crc kubenswrapper[5110]: I0122 14:18:04.454842 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"c7b0c5f7-6807-40b0-8295-0bac4129a62e","Type":"ContainerStarted","Data":"3dac329006ea36a23c3563a0e3b7073d477ba3172f26516294360f811f0cb978"} Jan 22 14:18:04 crc kubenswrapper[5110]: I0122 14:18:04.551986 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-trs4m" Jan 22 14:18:04 crc kubenswrapper[5110]: I0122 14:18:04.552050 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-trs4m" Jan 22 14:18:04 crc kubenswrapper[5110]: I0122 14:18:04.764471 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-trs4m" Jan 22 14:18:05 crc kubenswrapper[5110]: I0122 14:18:05.484385 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-trs4m" Jan 22 14:18:05 crc kubenswrapper[5110]: I0122 14:18:05.487942 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-wwt5t" Jan 22 14:18:05 crc kubenswrapper[5110]: I0122 14:18:05.488322 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wwt5t" Jan 22 14:18:05 crc kubenswrapper[5110]: 
I0122 14:18:05.495950 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xk6fr"
Jan 22 14:18:05 crc kubenswrapper[5110]: I0122 14:18:05.551045 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wwt5t"
Jan 22 14:18:06 crc kubenswrapper[5110]: I0122 14:18:06.072553 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-khw24"
Jan 22 14:18:06 crc kubenswrapper[5110]: I0122 14:18:06.072872 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-khw24"
Jan 22 14:18:06 crc kubenswrapper[5110]: I0122 14:18:06.114770 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-khw24"
Jan 22 14:18:06 crc kubenswrapper[5110]: I0122 14:18:06.467869 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"c7b0c5f7-6807-40b0-8295-0bac4129a62e","Type":"ContainerStarted","Data":"4f72a3c9a256b92529321d831507327fcb262cfeaa32cad1cd00c8c27e61da21"}
Jan 22 14:18:06 crc kubenswrapper[5110]: I0122 14:18:06.485614 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=4.485597917 podStartE2EDuration="4.485597917s" podCreationTimestamp="2026-01-22 14:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:18:06.479770873 +0000 UTC m=+166.701855242" watchObservedRunningTime="2026-01-22 14:18:06.485597917 +0000 UTC m=+166.707682266"
Jan 22 14:18:06 crc kubenswrapper[5110]: I0122 14:18:06.507797 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wwt5t"
Jan 22 14:18:06 crc kubenswrapper[5110]: I0122 14:18:06.508124 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-khw24"
Jan 22 14:18:06 crc kubenswrapper[5110]: I0122 14:18:06.553420 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rg44w"]
Jan 22 14:18:06 crc kubenswrapper[5110]: I0122 14:18:06.554023 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rg44w" podUID="4baaed1f-91ae-4249-96ed-11c5a93986ab" containerName="registry-server" containerID="cri-o://2e9f16ffea73a23b43ec1c766556ae6aa458687b5c6e21336f4c38901b3d0d34" gracePeriod=2
Jan 22 14:18:06 crc kubenswrapper[5110]: I0122 14:18:06.752754 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v85l6"]
Jan 22 14:18:06 crc kubenswrapper[5110]: I0122 14:18:06.753505 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v85l6" podUID="ca4e8bf1-1091-4c8c-b724-9af50f626c94" containerName="registry-server" containerID="cri-o://d026a64aad25e9c870ec43fbfc74d2be6fb7317440a7a30dfb307ca0587ff860" gracePeriod=2
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.124708 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rg44w"
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.187474 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4baaed1f-91ae-4249-96ed-11c5a93986ab-catalog-content\") pod \"4baaed1f-91ae-4249-96ed-11c5a93986ab\" (UID: \"4baaed1f-91ae-4249-96ed-11c5a93986ab\") "
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.187594 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9kbb\" (UniqueName: \"kubernetes.io/projected/4baaed1f-91ae-4249-96ed-11c5a93986ab-kube-api-access-p9kbb\") pod \"4baaed1f-91ae-4249-96ed-11c5a93986ab\" (UID: \"4baaed1f-91ae-4249-96ed-11c5a93986ab\") "
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.187710 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4baaed1f-91ae-4249-96ed-11c5a93986ab-utilities\") pod \"4baaed1f-91ae-4249-96ed-11c5a93986ab\" (UID: \"4baaed1f-91ae-4249-96ed-11c5a93986ab\") "
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.189163 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4baaed1f-91ae-4249-96ed-11c5a93986ab-utilities" (OuterVolumeSpecName: "utilities") pod "4baaed1f-91ae-4249-96ed-11c5a93986ab" (UID: "4baaed1f-91ae-4249-96ed-11c5a93986ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.195293 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4baaed1f-91ae-4249-96ed-11c5a93986ab-kube-api-access-p9kbb" (OuterVolumeSpecName: "kube-api-access-p9kbb") pod "4baaed1f-91ae-4249-96ed-11c5a93986ab" (UID: "4baaed1f-91ae-4249-96ed-11c5a93986ab"). InnerVolumeSpecName "kube-api-access-p9kbb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.289844 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p9kbb\" (UniqueName: \"kubernetes.io/projected/4baaed1f-91ae-4249-96ed-11c5a93986ab-kube-api-access-p9kbb\") on node \"crc\" DevicePath \"\""
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.289891 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4baaed1f-91ae-4249-96ed-11c5a93986ab-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.480686 5110 generic.go:358] "Generic (PLEG): container finished" podID="ca4e8bf1-1091-4c8c-b724-9af50f626c94" containerID="d026a64aad25e9c870ec43fbfc74d2be6fb7317440a7a30dfb307ca0587ff860" exitCode=0
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.480845 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v85l6" event={"ID":"ca4e8bf1-1091-4c8c-b724-9af50f626c94","Type":"ContainerDied","Data":"d026a64aad25e9c870ec43fbfc74d2be6fb7317440a7a30dfb307ca0587ff860"}
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.483711 5110 generic.go:358] "Generic (PLEG): container finished" podID="4baaed1f-91ae-4249-96ed-11c5a93986ab" containerID="2e9f16ffea73a23b43ec1c766556ae6aa458687b5c6e21336f4c38901b3d0d34" exitCode=0
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.483765 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rg44w" event={"ID":"4baaed1f-91ae-4249-96ed-11c5a93986ab","Type":"ContainerDied","Data":"2e9f16ffea73a23b43ec1c766556ae6aa458687b5c6e21336f4c38901b3d0d34"}
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.483793 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rg44w" event={"ID":"4baaed1f-91ae-4249-96ed-11c5a93986ab","Type":"ContainerDied","Data":"424407489d4d5c287109ae543b702896e9fa174afc1e8b5c8d89c9adeacfc74a"}
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.483814 5110 scope.go:117] "RemoveContainer" containerID="2e9f16ffea73a23b43ec1c766556ae6aa458687b5c6e21336f4c38901b3d0d34"
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.483811 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rg44w"
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.502759 5110 scope.go:117] "RemoveContainer" containerID="2596cbe3f2f152915b36a9547f07fcb39da8888bb74c516bc68be36c48af2ba6"
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.519472 5110 scope.go:117] "RemoveContainer" containerID="c9faa4f696055c1d02ecc9cb93461ce8f8cbe6dbcbdc9243d4cdc276e8a5d592"
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.539012 5110 scope.go:117] "RemoveContainer" containerID="2e9f16ffea73a23b43ec1c766556ae6aa458687b5c6e21336f4c38901b3d0d34"
Jan 22 14:18:08 crc kubenswrapper[5110]: E0122 14:18:08.539435 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e9f16ffea73a23b43ec1c766556ae6aa458687b5c6e21336f4c38901b3d0d34\": container with ID starting with 2e9f16ffea73a23b43ec1c766556ae6aa458687b5c6e21336f4c38901b3d0d34 not found: ID does not exist" containerID="2e9f16ffea73a23b43ec1c766556ae6aa458687b5c6e21336f4c38901b3d0d34"
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.539465 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e9f16ffea73a23b43ec1c766556ae6aa458687b5c6e21336f4c38901b3d0d34"} err="failed to get container status \"2e9f16ffea73a23b43ec1c766556ae6aa458687b5c6e21336f4c38901b3d0d34\": rpc error: code = NotFound desc = could not find container \"2e9f16ffea73a23b43ec1c766556ae6aa458687b5c6e21336f4c38901b3d0d34\": container with ID starting with 2e9f16ffea73a23b43ec1c766556ae6aa458687b5c6e21336f4c38901b3d0d34 not found: ID does not exist"
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.539502 5110 scope.go:117] "RemoveContainer" containerID="2596cbe3f2f152915b36a9547f07fcb39da8888bb74c516bc68be36c48af2ba6"
Jan 22 14:18:08 crc kubenswrapper[5110]: E0122 14:18:08.540046 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2596cbe3f2f152915b36a9547f07fcb39da8888bb74c516bc68be36c48af2ba6\": container with ID starting with 2596cbe3f2f152915b36a9547f07fcb39da8888bb74c516bc68be36c48af2ba6 not found: ID does not exist" containerID="2596cbe3f2f152915b36a9547f07fcb39da8888bb74c516bc68be36c48af2ba6"
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.540064 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2596cbe3f2f152915b36a9547f07fcb39da8888bb74c516bc68be36c48af2ba6"} err="failed to get container status \"2596cbe3f2f152915b36a9547f07fcb39da8888bb74c516bc68be36c48af2ba6\": rpc error: code = NotFound desc = could not find container \"2596cbe3f2f152915b36a9547f07fcb39da8888bb74c516bc68be36c48af2ba6\": container with ID starting with 2596cbe3f2f152915b36a9547f07fcb39da8888bb74c516bc68be36c48af2ba6 not found: ID does not exist"
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.540079 5110 scope.go:117] "RemoveContainer" containerID="c9faa4f696055c1d02ecc9cb93461ce8f8cbe6dbcbdc9243d4cdc276e8a5d592"
Jan 22 14:18:08 crc kubenswrapper[5110]: E0122 14:18:08.540271 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9faa4f696055c1d02ecc9cb93461ce8f8cbe6dbcbdc9243d4cdc276e8a5d592\": container with ID starting with c9faa4f696055c1d02ecc9cb93461ce8f8cbe6dbcbdc9243d4cdc276e8a5d592 not found: ID does not exist" containerID="c9faa4f696055c1d02ecc9cb93461ce8f8cbe6dbcbdc9243d4cdc276e8a5d592"
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.540289 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9faa4f696055c1d02ecc9cb93461ce8f8cbe6dbcbdc9243d4cdc276e8a5d592"} err="failed to get container status \"c9faa4f696055c1d02ecc9cb93461ce8f8cbe6dbcbdc9243d4cdc276e8a5d592\": rpc error: code = NotFound desc = could not find container \"c9faa4f696055c1d02ecc9cb93461ce8f8cbe6dbcbdc9243d4cdc276e8a5d592\": container with ID starting with c9faa4f696055c1d02ecc9cb93461ce8f8cbe6dbcbdc9243d4cdc276e8a5d592 not found: ID does not exist"
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.709601 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4baaed1f-91ae-4249-96ed-11c5a93986ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4baaed1f-91ae-4249-96ed-11c5a93986ab" (UID: "4baaed1f-91ae-4249-96ed-11c5a93986ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.796851 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4baaed1f-91ae-4249-96ed-11c5a93986ab-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.822268 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rg44w"]
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.826483 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rg44w"]
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.897804 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v85l6"
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.982050 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-trs4m"]
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.982605 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-trs4m" podUID="9f6cc7b0-6a05-44c4-baba-0decaa2ad061" containerName="registry-server" containerID="cri-o://1b68d5df452b6e05727d786eee1a756eb5bbe4263065c029d284c24092b65fca" gracePeriod=2
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.999014 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4e8bf1-1091-4c8c-b724-9af50f626c94-catalog-content\") pod \"ca4e8bf1-1091-4c8c-b724-9af50f626c94\" (UID: \"ca4e8bf1-1091-4c8c-b724-9af50f626c94\") "
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.999063 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4e8bf1-1091-4c8c-b724-9af50f626c94-utilities\") pod \"ca4e8bf1-1091-4c8c-b724-9af50f626c94\" (UID: \"ca4e8bf1-1091-4c8c-b724-9af50f626c94\") "
Jan 22 14:18:08 crc kubenswrapper[5110]: I0122 14:18:08.999102 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z228\" (UniqueName: \"kubernetes.io/projected/ca4e8bf1-1091-4c8c-b724-9af50f626c94-kube-api-access-7z228\") pod \"ca4e8bf1-1091-4c8c-b724-9af50f626c94\" (UID: \"ca4e8bf1-1091-4c8c-b724-9af50f626c94\") "
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.007552 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca4e8bf1-1091-4c8c-b724-9af50f626c94-utilities" (OuterVolumeSpecName: "utilities") pod "ca4e8bf1-1091-4c8c-b724-9af50f626c94" (UID: "ca4e8bf1-1091-4c8c-b724-9af50f626c94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.008413 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca4e8bf1-1091-4c8c-b724-9af50f626c94-kube-api-access-7z228" (OuterVolumeSpecName: "kube-api-access-7z228") pod "ca4e8bf1-1091-4c8c-b724-9af50f626c94" (UID: "ca4e8bf1-1091-4c8c-b724-9af50f626c94"). InnerVolumeSpecName "kube-api-access-7z228". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.022919 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca4e8bf1-1091-4c8c-b724-9af50f626c94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca4e8bf1-1091-4c8c-b724-9af50f626c94" (UID: "ca4e8bf1-1091-4c8c-b724-9af50f626c94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.100550 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca4e8bf1-1091-4c8c-b724-9af50f626c94-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.100578 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca4e8bf1-1091-4c8c-b724-9af50f626c94-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.100587 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7z228\" (UniqueName: \"kubernetes.io/projected/ca4e8bf1-1091-4c8c-b724-9af50f626c94-kube-api-access-7z228\") on node \"crc\" DevicePath \"\""
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.153146 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-khw24"]
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.153613 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-khw24" podUID="d7eb5f07-02fb-4d06-95f6-e0d5652bae89" containerName="registry-server" containerID="cri-o://9aae7ef004e6b61b8443acda9726d680a0f0e5754cd102ac6c332365055cd258" gracePeriod=2
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.349966 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trs4m"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.403223 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpmjm\" (UniqueName: \"kubernetes.io/projected/9f6cc7b0-6a05-44c4-baba-0decaa2ad061-kube-api-access-wpmjm\") pod \"9f6cc7b0-6a05-44c4-baba-0decaa2ad061\" (UID: \"9f6cc7b0-6a05-44c4-baba-0decaa2ad061\") "
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.410428 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f6cc7b0-6a05-44c4-baba-0decaa2ad061-kube-api-access-wpmjm" (OuterVolumeSpecName: "kube-api-access-wpmjm") pod "9f6cc7b0-6a05-44c4-baba-0decaa2ad061" (UID: "9f6cc7b0-6a05-44c4-baba-0decaa2ad061"). InnerVolumeSpecName "kube-api-access-wpmjm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.491057 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-khw24"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.491796 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v85l6" event={"ID":"ca4e8bf1-1091-4c8c-b724-9af50f626c94","Type":"ContainerDied","Data":"1ced04319b73ea16613f37342d47aa1492e26b96f9ef8ed7caba58e613bbb8d5"}
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.491943 5110 scope.go:117] "RemoveContainer" containerID="d026a64aad25e9c870ec43fbfc74d2be6fb7317440a7a30dfb307ca0587ff860"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.491810 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v85l6"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.496303 5110 generic.go:358] "Generic (PLEG): container finished" podID="d7eb5f07-02fb-4d06-95f6-e0d5652bae89" containerID="9aae7ef004e6b61b8443acda9726d680a0f0e5754cd102ac6c332365055cd258" exitCode=0
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.496386 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khw24" event={"ID":"d7eb5f07-02fb-4d06-95f6-e0d5652bae89","Type":"ContainerDied","Data":"9aae7ef004e6b61b8443acda9726d680a0f0e5754cd102ac6c332365055cd258"}
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.496404 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khw24" event={"ID":"d7eb5f07-02fb-4d06-95f6-e0d5652bae89","Type":"ContainerDied","Data":"1e6b683d7cfd8f7638ed40ff3ff46f0e26abf041d98ae5af3762f524b77e2481"}
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.496403 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-khw24"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.501798 5110 generic.go:358] "Generic (PLEG): container finished" podID="9f6cc7b0-6a05-44c4-baba-0decaa2ad061" containerID="1b68d5df452b6e05727d786eee1a756eb5bbe4263065c029d284c24092b65fca" exitCode=0
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.501858 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trs4m" event={"ID":"9f6cc7b0-6a05-44c4-baba-0decaa2ad061","Type":"ContainerDied","Data":"1b68d5df452b6e05727d786eee1a756eb5bbe4263065c029d284c24092b65fca"}
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.501889 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trs4m" event={"ID":"9f6cc7b0-6a05-44c4-baba-0decaa2ad061","Type":"ContainerDied","Data":"d0f8b61bdc57fd3aad2497fb7def94cb9114386b2177a61fe93b1e3b5f996f82"}
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.501987 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trs4m"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.505457 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f6cc7b0-6a05-44c4-baba-0decaa2ad061-catalog-content\") pod \"9f6cc7b0-6a05-44c4-baba-0decaa2ad061\" (UID: \"9f6cc7b0-6a05-44c4-baba-0decaa2ad061\") "
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.505552 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7eb5f07-02fb-4d06-95f6-e0d5652bae89-catalog-content\") pod \"d7eb5f07-02fb-4d06-95f6-e0d5652bae89\" (UID: \"d7eb5f07-02fb-4d06-95f6-e0d5652bae89\") "
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.505574 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7eb5f07-02fb-4d06-95f6-e0d5652bae89-utilities\") pod \"d7eb5f07-02fb-4d06-95f6-e0d5652bae89\" (UID: \"d7eb5f07-02fb-4d06-95f6-e0d5652bae89\") "
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.505606 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xz4z\" (UniqueName: \"kubernetes.io/projected/d7eb5f07-02fb-4d06-95f6-e0d5652bae89-kube-api-access-8xz4z\") pod \"d7eb5f07-02fb-4d06-95f6-e0d5652bae89\" (UID: \"d7eb5f07-02fb-4d06-95f6-e0d5652bae89\") "
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.505660 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f6cc7b0-6a05-44c4-baba-0decaa2ad061-utilities\") pod \"9f6cc7b0-6a05-44c4-baba-0decaa2ad061\" (UID: \"9f6cc7b0-6a05-44c4-baba-0decaa2ad061\") "
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.505975 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wpmjm\" (UniqueName: \"kubernetes.io/projected/9f6cc7b0-6a05-44c4-baba-0decaa2ad061-kube-api-access-wpmjm\") on node \"crc\" DevicePath \"\""
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.511270 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f6cc7b0-6a05-44c4-baba-0decaa2ad061-utilities" (OuterVolumeSpecName: "utilities") pod "9f6cc7b0-6a05-44c4-baba-0decaa2ad061" (UID: "9f6cc7b0-6a05-44c4-baba-0decaa2ad061"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.518023 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7eb5f07-02fb-4d06-95f6-e0d5652bae89-utilities" (OuterVolumeSpecName: "utilities") pod "d7eb5f07-02fb-4d06-95f6-e0d5652bae89" (UID: "d7eb5f07-02fb-4d06-95f6-e0d5652bae89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.535544 5110 scope.go:117] "RemoveContainer" containerID="ed22d452b1354b674a09fc055330c28b8b1cc9b4b72915c5663399c5b8b5bd2b"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.548079 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f6cc7b0-6a05-44c4-baba-0decaa2ad061-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f6cc7b0-6a05-44c4-baba-0decaa2ad061" (UID: "9f6cc7b0-6a05-44c4-baba-0decaa2ad061"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.551067 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7eb5f07-02fb-4d06-95f6-e0d5652bae89-kube-api-access-8xz4z" (OuterVolumeSpecName: "kube-api-access-8xz4z") pod "d7eb5f07-02fb-4d06-95f6-e0d5652bae89" (UID: "d7eb5f07-02fb-4d06-95f6-e0d5652bae89"). InnerVolumeSpecName "kube-api-access-8xz4z". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.570878 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v85l6"]
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.572686 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v85l6"]
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.574958 5110 scope.go:117] "RemoveContainer" containerID="04c073bc62cf780fe78979bd3e0ad17efe827b4d851bff105628ec8db1ac16ec"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.594050 5110 scope.go:117] "RemoveContainer" containerID="9aae7ef004e6b61b8443acda9726d680a0f0e5754cd102ac6c332365055cd258"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.606756 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f6cc7b0-6a05-44c4-baba-0decaa2ad061-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.606787 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7eb5f07-02fb-4d06-95f6-e0d5652bae89-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.606801 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8xz4z\" (UniqueName: \"kubernetes.io/projected/d7eb5f07-02fb-4d06-95f6-e0d5652bae89-kube-api-access-8xz4z\") on node \"crc\" DevicePath \"\""
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.606835 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f6cc7b0-6a05-44c4-baba-0decaa2ad061-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.614016 5110 scope.go:117] "RemoveContainer" containerID="c8f662670a503d1e755406c28d86377a7fe9dce90431d238f57bd35d2c4d5345"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.632496 5110 scope.go:117] "RemoveContainer" containerID="51534cfb7608ed2de01cf801974da77f74ae000d9aad4ba2c651b91c64aeac6a"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.648013 5110 scope.go:117] "RemoveContainer" containerID="9aae7ef004e6b61b8443acda9726d680a0f0e5754cd102ac6c332365055cd258"
Jan 22 14:18:09 crc kubenswrapper[5110]: E0122 14:18:09.648503 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aae7ef004e6b61b8443acda9726d680a0f0e5754cd102ac6c332365055cd258\": container with ID starting with 9aae7ef004e6b61b8443acda9726d680a0f0e5754cd102ac6c332365055cd258 not found: ID does not exist" containerID="9aae7ef004e6b61b8443acda9726d680a0f0e5754cd102ac6c332365055cd258"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.648632 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aae7ef004e6b61b8443acda9726d680a0f0e5754cd102ac6c332365055cd258"} err="failed to get container status \"9aae7ef004e6b61b8443acda9726d680a0f0e5754cd102ac6c332365055cd258\": rpc error: code = NotFound desc = could not find container \"9aae7ef004e6b61b8443acda9726d680a0f0e5754cd102ac6c332365055cd258\": container with ID starting with 9aae7ef004e6b61b8443acda9726d680a0f0e5754cd102ac6c332365055cd258 not found: ID does not exist"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.648742 5110 scope.go:117] "RemoveContainer" containerID="c8f662670a503d1e755406c28d86377a7fe9dce90431d238f57bd35d2c4d5345"
Jan 22 14:18:09 crc kubenswrapper[5110]: E0122 14:18:09.649159 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8f662670a503d1e755406c28d86377a7fe9dce90431d238f57bd35d2c4d5345\": container with ID starting with c8f662670a503d1e755406c28d86377a7fe9dce90431d238f57bd35d2c4d5345 not found: ID does not exist" containerID="c8f662670a503d1e755406c28d86377a7fe9dce90431d238f57bd35d2c4d5345"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.649200 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8f662670a503d1e755406c28d86377a7fe9dce90431d238f57bd35d2c4d5345"} err="failed to get container status \"c8f662670a503d1e755406c28d86377a7fe9dce90431d238f57bd35d2c4d5345\": rpc error: code = NotFound desc = could not find container \"c8f662670a503d1e755406c28d86377a7fe9dce90431d238f57bd35d2c4d5345\": container with ID starting with c8f662670a503d1e755406c28d86377a7fe9dce90431d238f57bd35d2c4d5345 not found: ID does not exist"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.649227 5110 scope.go:117] "RemoveContainer" containerID="51534cfb7608ed2de01cf801974da77f74ae000d9aad4ba2c651b91c64aeac6a"
Jan 22 14:18:09 crc kubenswrapper[5110]: E0122 14:18:09.649533 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51534cfb7608ed2de01cf801974da77f74ae000d9aad4ba2c651b91c64aeac6a\": container with ID starting with 51534cfb7608ed2de01cf801974da77f74ae000d9aad4ba2c651b91c64aeac6a not found: ID does not exist" containerID="51534cfb7608ed2de01cf801974da77f74ae000d9aad4ba2c651b91c64aeac6a"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.649645 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51534cfb7608ed2de01cf801974da77f74ae000d9aad4ba2c651b91c64aeac6a"} err="failed to get container status \"51534cfb7608ed2de01cf801974da77f74ae000d9aad4ba2c651b91c64aeac6a\": rpc error: code = NotFound desc = could not find container \"51534cfb7608ed2de01cf801974da77f74ae000d9aad4ba2c651b91c64aeac6a\": container with ID starting with 51534cfb7608ed2de01cf801974da77f74ae000d9aad4ba2c651b91c64aeac6a not found: ID does not exist"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.649747 5110 scope.go:117] "RemoveContainer" containerID="1b68d5df452b6e05727d786eee1a756eb5bbe4263065c029d284c24092b65fca"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.659093 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7eb5f07-02fb-4d06-95f6-e0d5652bae89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7eb5f07-02fb-4d06-95f6-e0d5652bae89" (UID: "d7eb5f07-02fb-4d06-95f6-e0d5652bae89"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.661690 5110 scope.go:117] "RemoveContainer" containerID="30cb93666c4e2ff4bc921056f9bcb90b995bc2c2111320c3bd70dc6ad991016f"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.676040 5110 scope.go:117] "RemoveContainer" containerID="cc34c44c234ed7625e72b04a55b156b51b3e577fde7629885b017f8b0f35c1e2"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.689656 5110 scope.go:117] "RemoveContainer" containerID="1b68d5df452b6e05727d786eee1a756eb5bbe4263065c029d284c24092b65fca"
Jan 22 14:18:09 crc kubenswrapper[5110]: E0122 14:18:09.690044 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b68d5df452b6e05727d786eee1a756eb5bbe4263065c029d284c24092b65fca\": container with ID starting with 1b68d5df452b6e05727d786eee1a756eb5bbe4263065c029d284c24092b65fca not found: ID does not exist" containerID="1b68d5df452b6e05727d786eee1a756eb5bbe4263065c029d284c24092b65fca"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.690099 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b68d5df452b6e05727d786eee1a756eb5bbe4263065c029d284c24092b65fca"} err="failed to get container status \"1b68d5df452b6e05727d786eee1a756eb5bbe4263065c029d284c24092b65fca\": rpc error: code = NotFound desc = could not find container \"1b68d5df452b6e05727d786eee1a756eb5bbe4263065c029d284c24092b65fca\": container with ID starting with 1b68d5df452b6e05727d786eee1a756eb5bbe4263065c029d284c24092b65fca not found: ID does not exist"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.690125 5110 scope.go:117] "RemoveContainer" containerID="30cb93666c4e2ff4bc921056f9bcb90b995bc2c2111320c3bd70dc6ad991016f"
Jan 22 14:18:09 crc kubenswrapper[5110]: E0122 14:18:09.690429 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30cb93666c4e2ff4bc921056f9bcb90b995bc2c2111320c3bd70dc6ad991016f\": container with ID starting with 30cb93666c4e2ff4bc921056f9bcb90b995bc2c2111320c3bd70dc6ad991016f not found: ID does not exist" containerID="30cb93666c4e2ff4bc921056f9bcb90b995bc2c2111320c3bd70dc6ad991016f"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.690550 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30cb93666c4e2ff4bc921056f9bcb90b995bc2c2111320c3bd70dc6ad991016f"} err="failed to get container status \"30cb93666c4e2ff4bc921056f9bcb90b995bc2c2111320c3bd70dc6ad991016f\": rpc error: code = NotFound desc = could not find container \"30cb93666c4e2ff4bc921056f9bcb90b995bc2c2111320c3bd70dc6ad991016f\": container with ID starting with 30cb93666c4e2ff4bc921056f9bcb90b995bc2c2111320c3bd70dc6ad991016f not found: ID does not exist"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.690577 5110 scope.go:117] "RemoveContainer" containerID="cc34c44c234ed7625e72b04a55b156b51b3e577fde7629885b017f8b0f35c1e2"
Jan 22 14:18:09 crc kubenswrapper[5110]: E0122 14:18:09.691024 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc34c44c234ed7625e72b04a55b156b51b3e577fde7629885b017f8b0f35c1e2\": container with ID starting with cc34c44c234ed7625e72b04a55b156b51b3e577fde7629885b017f8b0f35c1e2 not found: ID does not exist" containerID="cc34c44c234ed7625e72b04a55b156b51b3e577fde7629885b017f8b0f35c1e2"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.691045 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc34c44c234ed7625e72b04a55b156b51b3e577fde7629885b017f8b0f35c1e2"} err="failed to get container status \"cc34c44c234ed7625e72b04a55b156b51b3e577fde7629885b017f8b0f35c1e2\": rpc error: code = NotFound desc = could not find container \"cc34c44c234ed7625e72b04a55b156b51b3e577fde7629885b017f8b0f35c1e2\": container with ID starting with cc34c44c234ed7625e72b04a55b156b51b3e577fde7629885b017f8b0f35c1e2 not found: ID does not exist"
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.708080 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7eb5f07-02fb-4d06-95f6-e0d5652bae89-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.840133 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-trs4m"]
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.843494 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-trs4m"]
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.845722 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-khw24"]
Jan 22 14:18:09 crc kubenswrapper[5110]: I0122 14:18:09.847944 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-khw24"]
Jan 22 14:18:10 crc kubenswrapper[5110]: I0122 14:18:10.281983 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4baaed1f-91ae-4249-96ed-11c5a93986ab" path="/var/lib/kubelet/pods/4baaed1f-91ae-4249-96ed-11c5a93986ab/volumes"
Jan 22 14:18:10 crc kubenswrapper[5110]: I0122 14:18:10.283190 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f6cc7b0-6a05-44c4-baba-0decaa2ad061" path="/var/lib/kubelet/pods/9f6cc7b0-6a05-44c4-baba-0decaa2ad061/volumes"
Jan 22 14:18:10 crc kubenswrapper[5110]: I0122 14:18:10.284106 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca4e8bf1-1091-4c8c-b724-9af50f626c94" path="/var/lib/kubelet/pods/ca4e8bf1-1091-4c8c-b724-9af50f626c94/volumes"
Jan 22 14:18:10 crc kubenswrapper[5110]: I0122 14:18:10.285271 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7eb5f07-02fb-4d06-95f6-e0d5652bae89" path="/var/lib/kubelet/pods/d7eb5f07-02fb-4d06-95f6-e0d5652bae89/volumes"
Jan 22 14:18:12 crc kubenswrapper[5110]: I0122 14:18:12.493005 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6z98k"
Jan 22 14:18:13 crc kubenswrapper[5110]: I0122 14:18:13.486848 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hxzv2"
Jan 22 14:18:19 crc kubenswrapper[5110]: I0122 14:18:19.426555 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-x4jp5"
Jan 22 14:18:29 crc kubenswrapper[5110]: I0122 14:18:29.424806 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 14:18:33 crc kubenswrapper[5110]: I0122 14:18:33.460649 5110 ???:1] "http: TLS handshake error from 192.168.126.11:49554: no serving certificate available for the kubelet"
Jan 22 14:18:34 crc kubenswrapper[5110]: I0122 14:18:34.997373 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-pgzmf"]
Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.592551 5110 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.594920 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ca4e8bf1-1091-4c8c-b724-9af50f626c94" containerName="extract-content"
Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.594967 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4e8bf1-1091-4c8c-b724-9af50f626c94" containerName="extract-content"
Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.594992 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7eb5f07-02fb-4d06-95f6-e0d5652bae89" containerName="extract-content"
Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595000 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7eb5f07-02fb-4d06-95f6-e0d5652bae89" containerName="extract-content"
Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595019 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7eb5f07-02fb-4d06-95f6-e0d5652bae89" containerName="extract-utilities"
Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595026 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7eb5f07-02fb-4d06-95f6-e0d5652bae89" containerName="extract-utilities"
Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595047 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9f6cc7b0-6a05-44c4-baba-0decaa2ad061" containerName="extract-utilities"
Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595054 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f6cc7b0-6a05-44c4-baba-0decaa2ad061" containerName="extract-utilities"
Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595063 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9f6cc7b0-6a05-44c4-baba-0decaa2ad061" containerName="registry-server"
Jan 22 14:18:43 crc kubenswrapper[5110]: I0122
14:18:43.595070 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f6cc7b0-6a05-44c4-baba-0decaa2ad061" containerName="registry-server" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595078 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4baaed1f-91ae-4249-96ed-11c5a93986ab" containerName="extract-utilities" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595086 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="4baaed1f-91ae-4249-96ed-11c5a93986ab" containerName="extract-utilities" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595097 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9f6cc7b0-6a05-44c4-baba-0decaa2ad061" containerName="extract-content" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595104 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f6cc7b0-6a05-44c4-baba-0decaa2ad061" containerName="extract-content" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595112 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7eb5f07-02fb-4d06-95f6-e0d5652bae89" containerName="registry-server" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595119 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7eb5f07-02fb-4d06-95f6-e0d5652bae89" containerName="registry-server" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595131 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ca4e8bf1-1091-4c8c-b724-9af50f626c94" containerName="extract-utilities" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595139 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4e8bf1-1091-4c8c-b724-9af50f626c94" containerName="extract-utilities" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595148 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4baaed1f-91ae-4249-96ed-11c5a93986ab" 
containerName="registry-server" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595154 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="4baaed1f-91ae-4249-96ed-11c5a93986ab" containerName="registry-server" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595165 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ca4e8bf1-1091-4c8c-b724-9af50f626c94" containerName="registry-server" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595172 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4e8bf1-1091-4c8c-b724-9af50f626c94" containerName="registry-server" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595180 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4baaed1f-91ae-4249-96ed-11c5a93986ab" containerName="extract-content" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595186 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="4baaed1f-91ae-4249-96ed-11c5a93986ab" containerName="extract-content" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595289 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="4baaed1f-91ae-4249-96ed-11c5a93986ab" containerName="registry-server" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595306 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="d7eb5f07-02fb-4d06-95f6-e0d5652bae89" containerName="registry-server" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595320 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="ca4e8bf1-1091-4c8c-b724-9af50f626c94" containerName="registry-server" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.595329 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="9f6cc7b0-6a05-44c4-baba-0decaa2ad061" containerName="registry-server" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.609769 5110 kubelet.go:2547] "SyncLoop REMOVE" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.609835 5110 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.609989 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.610672 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://872c2dac50c7f7b271cfb6e7cdfa4d5e922da39ef42941678469947454b1e572" gracePeriod=15 Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.610800 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3614847fd33273d653fb4a663693171aff2ceb59eaba786247135909473a1a9c" gracePeriod=15 Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.610845 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.610868 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.610884 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.610892 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-check-endpoints" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.610898 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7d2c08e39879f53e21ccd4ecb20d62edd65c7f5acd2c2025f318ee2f6eef942f" gracePeriod=15 Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.610925 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://5e10fa56c9f05a69f1fa1ac51c4fbf617c845aba6e15a4fc67169d365931259c" gracePeriod=15 Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.610903 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.610975 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611003 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611012 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611041 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 
14:18:43.610957 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e4e44e7b63df2eabb368c778283fa297c5627baef12f749319188b018933e4a7" gracePeriod=15 Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611050 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611166 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611175 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611183 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611189 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611200 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611208 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611310 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 
14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611323 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611334 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611342 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611350 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611358 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611366 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611475 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611483 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611493 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611499 
5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611607 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.611647 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.615730 5110 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.634300 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.652232 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.663965 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.664017 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.664049 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.664085 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.664118 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.664141 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.664180 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: 
\"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.664200 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.664219 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.664264 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.765280 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.765363 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.765399 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.765422 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.765540 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.765568 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.765650 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.765753 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.766148 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.766305 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.766416 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.766475 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.766544 5110 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.766613 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.766643 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.766730 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.766908 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.766975 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.767062 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.767136 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: I0122 14:18:43.943397 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 14:18:43 crc kubenswrapper[5110]: E0122 14:18:43.964695 5110 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.17:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d135e350881d2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:18:43.964076498 +0000 UTC m=+204.186160857,LastTimestamp:2026-01-22 14:18:43.964076498 +0000 UTC m=+204.186160857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:18:44 crc kubenswrapper[5110]: I0122 14:18:44.700223 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 22 14:18:44 crc kubenswrapper[5110]: I0122 14:18:44.702948 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 14:18:44 crc kubenswrapper[5110]: I0122 14:18:44.704106 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" 
containerID="5e10fa56c9f05a69f1fa1ac51c4fbf617c845aba6e15a4fc67169d365931259c" exitCode=0 Jan 22 14:18:44 crc kubenswrapper[5110]: I0122 14:18:44.704226 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3614847fd33273d653fb4a663693171aff2ceb59eaba786247135909473a1a9c" exitCode=0 Jan 22 14:18:44 crc kubenswrapper[5110]: I0122 14:18:44.704319 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7d2c08e39879f53e21ccd4ecb20d62edd65c7f5acd2c2025f318ee2f6eef942f" exitCode=0 Jan 22 14:18:44 crc kubenswrapper[5110]: I0122 14:18:44.704440 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e4e44e7b63df2eabb368c778283fa297c5627baef12f749319188b018933e4a7" exitCode=2 Jan 22 14:18:44 crc kubenswrapper[5110]: I0122 14:18:44.704234 5110 scope.go:117] "RemoveContainer" containerID="d92e709029c46faaa9145ae147eb23298fd350281c9da48e17ca1a5fe7b5de07" Jan 22 14:18:44 crc kubenswrapper[5110]: I0122 14:18:44.707522 5110 generic.go:358] "Generic (PLEG): container finished" podID="c7b0c5f7-6807-40b0-8295-0bac4129a62e" containerID="4f72a3c9a256b92529321d831507327fcb262cfeaa32cad1cd00c8c27e61da21" exitCode=0 Jan 22 14:18:44 crc kubenswrapper[5110]: I0122 14:18:44.707689 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"c7b0c5f7-6807-40b0-8295-0bac4129a62e","Type":"ContainerDied","Data":"4f72a3c9a256b92529321d831507327fcb262cfeaa32cad1cd00c8c27e61da21"} Jan 22 14:18:44 crc kubenswrapper[5110]: I0122 14:18:44.709010 5110 status_manager.go:895] "Failed to get status for pod" podUID="c7b0c5f7-6807-40b0-8295-0bac4129a62e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:44 
crc kubenswrapper[5110]: I0122 14:18:44.709474 5110 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:44 crc kubenswrapper[5110]: I0122 14:18:44.709872 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"8bc34df2e378d09e54395cc75c857692fceff35e64815393130f8a51b227429f"} Jan 22 14:18:44 crc kubenswrapper[5110]: I0122 14:18:44.709927 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"a102f204037c08018eaeaa63af02001e58b75228a803f99b425b07864be8aac7"} Jan 22 14:18:44 crc kubenswrapper[5110]: I0122 14:18:44.710886 5110 status_manager.go:895] "Failed to get status for pod" podUID="c7b0c5f7-6807-40b0-8295-0bac4129a62e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:44 crc kubenswrapper[5110]: I0122 14:18:44.711490 5110 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:45 crc kubenswrapper[5110]: E0122 14:18:45.363275 5110 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:45 crc kubenswrapper[5110]: E0122 14:18:45.363874 5110 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:45 crc kubenswrapper[5110]: E0122 14:18:45.364207 5110 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:45 crc kubenswrapper[5110]: E0122 14:18:45.364464 5110 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:45 crc kubenswrapper[5110]: E0122 14:18:45.364733 5110 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:45 crc kubenswrapper[5110]: I0122 14:18:45.364760 5110 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 22 14:18:45 crc kubenswrapper[5110]: E0122 14:18:45.365002 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.17:6443: connect: connection refused" interval="200ms" Jan 22 14:18:45 crc kubenswrapper[5110]: E0122 14:18:45.566589 5110 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.17:6443: connect: connection refused" interval="400ms" Jan 22 14:18:45 crc kubenswrapper[5110]: I0122 14:18:45.727409 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 14:18:45 crc kubenswrapper[5110]: E0122 14:18:45.968309 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.17:6443: connect: connection refused" interval="800ms" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.027831 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.028764 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.029266 5110 status_manager.go:895] "Failed to get status for pod" podUID="c7b0c5f7-6807-40b0-8295-0bac4129a62e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.029607 5110 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.030569 5110 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.030655 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.031043 5110 status_manager.go:895] "Failed to get status for pod" podUID="c7b0c5f7-6807-40b0-8295-0bac4129a62e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.031307 5110 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.031828 5110 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.097943 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c7b0c5f7-6807-40b0-8295-0bac4129a62e-var-lock\") pod \"c7b0c5f7-6807-40b0-8295-0bac4129a62e\" (UID: \"c7b0c5f7-6807-40b0-8295-0bac4129a62e\") " Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.098011 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.098057 
5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7b0c5f7-6807-40b0-8295-0bac4129a62e-kubelet-dir\") pod \"c7b0c5f7-6807-40b0-8295-0bac4129a62e\" (UID: \"c7b0c5f7-6807-40b0-8295-0bac4129a62e\") " Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.098043 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7b0c5f7-6807-40b0-8295-0bac4129a62e-var-lock" (OuterVolumeSpecName: "var-lock") pod "c7b0c5f7-6807-40b0-8295-0bac4129a62e" (UID: "c7b0c5f7-6807-40b0-8295-0bac4129a62e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.098069 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.098119 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.098140 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.098162 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7b0c5f7-6807-40b0-8295-0bac4129a62e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c7b0c5f7-6807-40b0-8295-0bac4129a62e" (UID: "c7b0c5f7-6807-40b0-8295-0bac4129a62e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.098158 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.098248 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.098310 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.098357 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7b0c5f7-6807-40b0-8295-0bac4129a62e-kube-api-access\") pod \"c7b0c5f7-6807-40b0-8295-0bac4129a62e\" (UID: \"c7b0c5f7-6807-40b0-8295-0bac4129a62e\") " Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.098375 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.098677 5110 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.098689 5110 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c7b0c5f7-6807-40b0-8295-0bac4129a62e-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.098698 5110 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.098706 5110 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7b0c5f7-6807-40b0-8295-0bac4129a62e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.098713 5110 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.098836 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.102014 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.107221 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b0c5f7-6807-40b0-8295-0bac4129a62e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c7b0c5f7-6807-40b0-8295-0bac4129a62e" (UID: "c7b0c5f7-6807-40b0-8295-0bac4129a62e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.199979 5110 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.200014 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7b0c5f7-6807-40b0-8295-0bac4129a62e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.200025 5110 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.281830 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.737704 
5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.738914 5110 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="872c2dac50c7f7b271cfb6e7cdfa4d5e922da39ef42941678469947454b1e572" exitCode=0 Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.739031 5110 scope.go:117] "RemoveContainer" containerID="5e10fa56c9f05a69f1fa1ac51c4fbf617c845aba6e15a4fc67169d365931259c" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.739072 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.739946 5110 status_manager.go:895] "Failed to get status for pod" podUID="c7b0c5f7-6807-40b0-8295-0bac4129a62e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.740104 5110 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.740348 5110 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 
14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.743421 5110 status_manager.go:895] "Failed to get status for pod" podUID="c7b0c5f7-6807-40b0-8295-0bac4129a62e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.743663 5110 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.744254 5110 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.745330 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"c7b0c5f7-6807-40b0-8295-0bac4129a62e","Type":"ContainerDied","Data":"3dac329006ea36a23c3563a0e3b7073d477ba3172f26516294360f811f0cb978"} Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.745357 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dac329006ea36a23c3563a0e3b7073d477ba3172f26516294360f811f0cb978" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.745467 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.750010 5110 status_manager.go:895] "Failed to get status for pod" podUID="c7b0c5f7-6807-40b0-8295-0bac4129a62e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.750330 5110 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.750535 5110 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.759734 5110 scope.go:117] "RemoveContainer" containerID="3614847fd33273d653fb4a663693171aff2ceb59eaba786247135909473a1a9c" Jan 22 14:18:46 crc kubenswrapper[5110]: E0122 14:18:46.769511 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.17:6443: connect: connection refused" interval="1.6s" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.774476 5110 scope.go:117] "RemoveContainer" containerID="7d2c08e39879f53e21ccd4ecb20d62edd65c7f5acd2c2025f318ee2f6eef942f" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 
14:18:46.804142 5110 scope.go:117] "RemoveContainer" containerID="e4e44e7b63df2eabb368c778283fa297c5627baef12f749319188b018933e4a7" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.822225 5110 scope.go:117] "RemoveContainer" containerID="872c2dac50c7f7b271cfb6e7cdfa4d5e922da39ef42941678469947454b1e572" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.837478 5110 scope.go:117] "RemoveContainer" containerID="f27cdc5021a20b7752733cb6527b8791d142b8cdeed6c285d545ed03cf8d8096" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.908056 5110 scope.go:117] "RemoveContainer" containerID="5e10fa56c9f05a69f1fa1ac51c4fbf617c845aba6e15a4fc67169d365931259c" Jan 22 14:18:46 crc kubenswrapper[5110]: E0122 14:18:46.908704 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e10fa56c9f05a69f1fa1ac51c4fbf617c845aba6e15a4fc67169d365931259c\": container with ID starting with 5e10fa56c9f05a69f1fa1ac51c4fbf617c845aba6e15a4fc67169d365931259c not found: ID does not exist" containerID="5e10fa56c9f05a69f1fa1ac51c4fbf617c845aba6e15a4fc67169d365931259c" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.908748 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e10fa56c9f05a69f1fa1ac51c4fbf617c845aba6e15a4fc67169d365931259c"} err="failed to get container status \"5e10fa56c9f05a69f1fa1ac51c4fbf617c845aba6e15a4fc67169d365931259c\": rpc error: code = NotFound desc = could not find container \"5e10fa56c9f05a69f1fa1ac51c4fbf617c845aba6e15a4fc67169d365931259c\": container with ID starting with 5e10fa56c9f05a69f1fa1ac51c4fbf617c845aba6e15a4fc67169d365931259c not found: ID does not exist" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.908795 5110 scope.go:117] "RemoveContainer" containerID="3614847fd33273d653fb4a663693171aff2ceb59eaba786247135909473a1a9c" Jan 22 14:18:46 crc kubenswrapper[5110]: E0122 14:18:46.909592 5110 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3614847fd33273d653fb4a663693171aff2ceb59eaba786247135909473a1a9c\": container with ID starting with 3614847fd33273d653fb4a663693171aff2ceb59eaba786247135909473a1a9c not found: ID does not exist" containerID="3614847fd33273d653fb4a663693171aff2ceb59eaba786247135909473a1a9c" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.909686 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3614847fd33273d653fb4a663693171aff2ceb59eaba786247135909473a1a9c"} err="failed to get container status \"3614847fd33273d653fb4a663693171aff2ceb59eaba786247135909473a1a9c\": rpc error: code = NotFound desc = could not find container \"3614847fd33273d653fb4a663693171aff2ceb59eaba786247135909473a1a9c\": container with ID starting with 3614847fd33273d653fb4a663693171aff2ceb59eaba786247135909473a1a9c not found: ID does not exist" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.909707 5110 scope.go:117] "RemoveContainer" containerID="7d2c08e39879f53e21ccd4ecb20d62edd65c7f5acd2c2025f318ee2f6eef942f" Jan 22 14:18:46 crc kubenswrapper[5110]: E0122 14:18:46.910958 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d2c08e39879f53e21ccd4ecb20d62edd65c7f5acd2c2025f318ee2f6eef942f\": container with ID starting with 7d2c08e39879f53e21ccd4ecb20d62edd65c7f5acd2c2025f318ee2f6eef942f not found: ID does not exist" containerID="7d2c08e39879f53e21ccd4ecb20d62edd65c7f5acd2c2025f318ee2f6eef942f" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.910985 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d2c08e39879f53e21ccd4ecb20d62edd65c7f5acd2c2025f318ee2f6eef942f"} err="failed to get container status \"7d2c08e39879f53e21ccd4ecb20d62edd65c7f5acd2c2025f318ee2f6eef942f\": rpc error: code = NotFound desc = could 
not find container \"7d2c08e39879f53e21ccd4ecb20d62edd65c7f5acd2c2025f318ee2f6eef942f\": container with ID starting with 7d2c08e39879f53e21ccd4ecb20d62edd65c7f5acd2c2025f318ee2f6eef942f not found: ID does not exist" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.911000 5110 scope.go:117] "RemoveContainer" containerID="e4e44e7b63df2eabb368c778283fa297c5627baef12f749319188b018933e4a7" Jan 22 14:18:46 crc kubenswrapper[5110]: E0122 14:18:46.911600 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4e44e7b63df2eabb368c778283fa297c5627baef12f749319188b018933e4a7\": container with ID starting with e4e44e7b63df2eabb368c778283fa297c5627baef12f749319188b018933e4a7 not found: ID does not exist" containerID="e4e44e7b63df2eabb368c778283fa297c5627baef12f749319188b018933e4a7" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.911647 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e44e7b63df2eabb368c778283fa297c5627baef12f749319188b018933e4a7"} err="failed to get container status \"e4e44e7b63df2eabb368c778283fa297c5627baef12f749319188b018933e4a7\": rpc error: code = NotFound desc = could not find container \"e4e44e7b63df2eabb368c778283fa297c5627baef12f749319188b018933e4a7\": container with ID starting with e4e44e7b63df2eabb368c778283fa297c5627baef12f749319188b018933e4a7 not found: ID does not exist" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.911686 5110 scope.go:117] "RemoveContainer" containerID="872c2dac50c7f7b271cfb6e7cdfa4d5e922da39ef42941678469947454b1e572" Jan 22 14:18:46 crc kubenswrapper[5110]: E0122 14:18:46.911994 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"872c2dac50c7f7b271cfb6e7cdfa4d5e922da39ef42941678469947454b1e572\": container with ID starting with 872c2dac50c7f7b271cfb6e7cdfa4d5e922da39ef42941678469947454b1e572 not found: 
ID does not exist" containerID="872c2dac50c7f7b271cfb6e7cdfa4d5e922da39ef42941678469947454b1e572" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.912043 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"872c2dac50c7f7b271cfb6e7cdfa4d5e922da39ef42941678469947454b1e572"} err="failed to get container status \"872c2dac50c7f7b271cfb6e7cdfa4d5e922da39ef42941678469947454b1e572\": rpc error: code = NotFound desc = could not find container \"872c2dac50c7f7b271cfb6e7cdfa4d5e922da39ef42941678469947454b1e572\": container with ID starting with 872c2dac50c7f7b271cfb6e7cdfa4d5e922da39ef42941678469947454b1e572 not found: ID does not exist" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.912061 5110 scope.go:117] "RemoveContainer" containerID="f27cdc5021a20b7752733cb6527b8791d142b8cdeed6c285d545ed03cf8d8096" Jan 22 14:18:46 crc kubenswrapper[5110]: E0122 14:18:46.912417 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f27cdc5021a20b7752733cb6527b8791d142b8cdeed6c285d545ed03cf8d8096\": container with ID starting with f27cdc5021a20b7752733cb6527b8791d142b8cdeed6c285d545ed03cf8d8096 not found: ID does not exist" containerID="f27cdc5021a20b7752733cb6527b8791d142b8cdeed6c285d545ed03cf8d8096" Jan 22 14:18:46 crc kubenswrapper[5110]: I0122 14:18:46.912441 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f27cdc5021a20b7752733cb6527b8791d142b8cdeed6c285d545ed03cf8d8096"} err="failed to get container status \"f27cdc5021a20b7752733cb6527b8791d142b8cdeed6c285d545ed03cf8d8096\": rpc error: code = NotFound desc = could not find container \"f27cdc5021a20b7752733cb6527b8791d142b8cdeed6c285d545ed03cf8d8096\": container with ID starting with f27cdc5021a20b7752733cb6527b8791d142b8cdeed6c285d545ed03cf8d8096 not found: ID does not exist" Jan 22 14:18:48 crc kubenswrapper[5110]: E0122 14:18:48.371038 5110 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.17:6443: connect: connection refused" interval="3.2s" Jan 22 14:18:49 crc kubenswrapper[5110]: I0122 14:18:49.691303 5110 patch_prober.go:28] interesting pod/machine-config-daemon-grf5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:18:49 crc kubenswrapper[5110]: I0122 14:18:49.691714 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" podUID="6bfecfa4-ce38-4a92-a3dc-588176267b96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:18:50 crc kubenswrapper[5110]: I0122 14:18:50.276950 5110 status_manager.go:895] "Failed to get status for pod" podUID="c7b0c5f7-6807-40b0-8295-0bac4129a62e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:50 crc kubenswrapper[5110]: I0122 14:18:50.277358 5110 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:51 crc kubenswrapper[5110]: E0122 14:18:51.306052 5110 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.17:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d135e350881d2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 14:18:43.964076498 +0000 UTC m=+204.186160857,LastTimestamp:2026-01-22 14:18:43.964076498 +0000 UTC m=+204.186160857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 14:18:51 crc kubenswrapper[5110]: E0122 14:18:51.572303 5110 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.17:6443: connect: connection refused" interval="6.4s" Jan 22 14:18:55 crc kubenswrapper[5110]: I0122 14:18:55.272919 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:55 crc kubenswrapper[5110]: I0122 14:18:55.274913 5110 status_manager.go:895] "Failed to get status for pod" podUID="c7b0c5f7-6807-40b0-8295-0bac4129a62e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:55 crc kubenswrapper[5110]: I0122 14:18:55.275457 5110 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:55 crc kubenswrapper[5110]: I0122 14:18:55.295262 5110 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="869455b8-f444-4ee3-9a9a-c737007425b5" Jan 22 14:18:55 crc kubenswrapper[5110]: I0122 14:18:55.295291 5110 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="869455b8-f444-4ee3-9a9a-c737007425b5" Jan 22 14:18:55 crc kubenswrapper[5110]: E0122 14:18:55.295778 5110 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:55 crc kubenswrapper[5110]: I0122 14:18:55.296288 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:55 crc kubenswrapper[5110]: I0122 14:18:55.808966 5110 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="a9c5ad8ab2abb7485e664d3e7a647937109f789afa1a1f5931f7e841cf3f61b5" exitCode=0 Jan 22 14:18:55 crc kubenswrapper[5110]: I0122 14:18:55.809065 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"a9c5ad8ab2abb7485e664d3e7a647937109f789afa1a1f5931f7e841cf3f61b5"} Jan 22 14:18:55 crc kubenswrapper[5110]: I0122 14:18:55.809689 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"2d3297b6a29039f73b54edec2f51469dd43c1390c4d3d1c0464d948c48709ad8"} Jan 22 14:18:55 crc kubenswrapper[5110]: I0122 14:18:55.810094 5110 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="869455b8-f444-4ee3-9a9a-c737007425b5" Jan 22 14:18:55 crc kubenswrapper[5110]: I0122 14:18:55.810123 5110 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="869455b8-f444-4ee3-9a9a-c737007425b5" Jan 22 14:18:55 crc kubenswrapper[5110]: E0122 14:18:55.810588 5110 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:55 crc kubenswrapper[5110]: I0122 14:18:55.810847 5110 status_manager.go:895] "Failed to get status for pod" podUID="c7b0c5f7-6807-40b0-8295-0bac4129a62e" pod="openshift-kube-apiserver/installer-12-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:55 crc kubenswrapper[5110]: I0122 14:18:55.811376 5110 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.17:6443: connect: connection refused" Jan 22 14:18:56 crc kubenswrapper[5110]: I0122 14:18:56.839947 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"7474a8399c18b3dc7c89fc0212d67fe1e3005e934cab05f57fe5d412381ab509"} Jan 22 14:18:56 crc kubenswrapper[5110]: I0122 14:18:56.839992 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"236a7ad1b334a8a64b6300e958f0293e10651a0a311ca10b1a92ab61d14c6c10"} Jan 22 14:18:56 crc kubenswrapper[5110]: I0122 14:18:56.840000 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"0ad4855708f1b67cef34a1226fe719340d06bdcafe9faf2f2c573af5c41e868f"} Jan 22 14:18:57 crc kubenswrapper[5110]: I0122 14:18:57.860138 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"32b7e1fab9d8cf991cc197e33a514ef02913c4465484a2d4de9431db4e8ecb43"} Jan 22 14:18:57 crc kubenswrapper[5110]: I0122 14:18:57.860447 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"5e39ce6a0c0859c621397f32db4474aa8d9336d5637b4109e76dac8fd995283b"} Jan 22 14:18:57 crc kubenswrapper[5110]: I0122 14:18:57.860472 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:18:57 crc kubenswrapper[5110]: I0122 14:18:57.860370 5110 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="869455b8-f444-4ee3-9a9a-c737007425b5" Jan 22 14:18:57 crc kubenswrapper[5110]: I0122 14:18:57.860494 5110 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="869455b8-f444-4ee3-9a9a-c737007425b5" Jan 22 14:18:58 crc kubenswrapper[5110]: I0122 14:18:58.869699 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 14:18:58 crc kubenswrapper[5110]: I0122 14:18:58.869762 5110 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="4d58e05af32372135da721ce73766f6ea10f366c5c1b13a318803ff65725c882" exitCode=1 Jan 22 14:18:58 crc kubenswrapper[5110]: I0122 14:18:58.869805 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"4d58e05af32372135da721ce73766f6ea10f366c5c1b13a318803ff65725c882"} Jan 22 14:18:58 crc kubenswrapper[5110]: I0122 14:18:58.870704 5110 scope.go:117] "RemoveContainer" containerID="4d58e05af32372135da721ce73766f6ea10f366c5c1b13a318803ff65725c882" Jan 22 14:18:59 crc kubenswrapper[5110]: I0122 14:18:59.882099 5110 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 14:18:59 crc kubenswrapper[5110]: I0122 14:18:59.883321 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"729c7e63c92a71ca30e9438f1c8ab959275423b9b97d92aa5e1b0663455da83f"} Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.028489 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" podUID="aa774b1c-ef48-4bd8-a4df-d3b963e547e6" containerName="oauth-openshift" containerID="cri-o://a2f3db0644662ca5cac96e5e430af076429ab84bc25e86ff668f51fb2272f0de" gracePeriod=15 Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.294455 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.297896 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.299202 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.308833 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.434746 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.589477 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-idp-0-file-data\") pod \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.589585 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-service-ca\") pod \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.589709 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-serving-cert\") pod \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.589774 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-trusted-ca-bundle\") pod \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.589822 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-ocp-branding-template\") pod \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\" 
(UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.589892 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2szb\" (UniqueName: \"kubernetes.io/projected/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-kube-api-access-t2szb\") pod \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.589978 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-audit-dir\") pod \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.590041 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-router-certs\") pod \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.590072 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-audit-policies\") pod \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.590174 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-cliconfig\") pod \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.590216 5110 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-template-login\") pod \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.590258 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-template-error\") pod \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.590561 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-session\") pod \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.590714 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-template-provider-selection\") pod \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\" (UID: \"aa774b1c-ef48-4bd8-a4df-d3b963e547e6\") " Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.592304 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "aa774b1c-ef48-4bd8-a4df-d3b963e547e6" (UID: "aa774b1c-ef48-4bd8-a4df-d3b963e547e6"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.593410 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "aa774b1c-ef48-4bd8-a4df-d3b963e547e6" (UID: "aa774b1c-ef48-4bd8-a4df-d3b963e547e6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.593515 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "aa774b1c-ef48-4bd8-a4df-d3b963e547e6" (UID: "aa774b1c-ef48-4bd8-a4df-d3b963e547e6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.594062 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "aa774b1c-ef48-4bd8-a4df-d3b963e547e6" (UID: "aa774b1c-ef48-4bd8-a4df-d3b963e547e6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.594905 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "aa774b1c-ef48-4bd8-a4df-d3b963e547e6" (UID: "aa774b1c-ef48-4bd8-a4df-d3b963e547e6"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.597890 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "aa774b1c-ef48-4bd8-a4df-d3b963e547e6" (UID: "aa774b1c-ef48-4bd8-a4df-d3b963e547e6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.598132 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "aa774b1c-ef48-4bd8-a4df-d3b963e547e6" (UID: "aa774b1c-ef48-4bd8-a4df-d3b963e547e6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.599020 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "aa774b1c-ef48-4bd8-a4df-d3b963e547e6" (UID: "aa774b1c-ef48-4bd8-a4df-d3b963e547e6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.599300 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "aa774b1c-ef48-4bd8-a4df-d3b963e547e6" (UID: "aa774b1c-ef48-4bd8-a4df-d3b963e547e6"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.599557 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "aa774b1c-ef48-4bd8-a4df-d3b963e547e6" (UID: "aa774b1c-ef48-4bd8-a4df-d3b963e547e6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.602871 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "aa774b1c-ef48-4bd8-a4df-d3b963e547e6" (UID: "aa774b1c-ef48-4bd8-a4df-d3b963e547e6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.603220 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-kube-api-access-t2szb" (OuterVolumeSpecName: "kube-api-access-t2szb") pod "aa774b1c-ef48-4bd8-a4df-d3b963e547e6" (UID: "aa774b1c-ef48-4bd8-a4df-d3b963e547e6"). InnerVolumeSpecName "kube-api-access-t2szb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.603426 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "aa774b1c-ef48-4bd8-a4df-d3b963e547e6" (UID: "aa774b1c-ef48-4bd8-a4df-d3b963e547e6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.603632 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "aa774b1c-ef48-4bd8-a4df-d3b963e547e6" (UID: "aa774b1c-ef48-4bd8-a4df-d3b963e547e6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.692906 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.692963 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t2szb\" (UniqueName: \"kubernetes.io/projected/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-kube-api-access-t2szb\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.692984 5110 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.693003 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.693021 5110 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 
14:19:00.693041 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.693064 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.693093 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.693121 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.693148 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.693177 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.693203 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.693229 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.693336 5110 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa774b1c-ef48-4bd8-a4df-d3b963e547e6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.892488 5110 generic.go:358] "Generic (PLEG): container finished" podID="aa774b1c-ef48-4bd8-a4df-d3b963e547e6" containerID="a2f3db0644662ca5cac96e5e430af076429ab84bc25e86ff668f51fb2272f0de" exitCode=0 Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.892557 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" event={"ID":"aa774b1c-ef48-4bd8-a4df-d3b963e547e6","Type":"ContainerDied","Data":"a2f3db0644662ca5cac96e5e430af076429ab84bc25e86ff668f51fb2272f0de"} Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.892608 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.892657 5110 scope.go:117] "RemoveContainer" containerID="a2f3db0644662ca5cac96e5e430af076429ab84bc25e86ff668f51fb2272f0de" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.892638 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-pgzmf" event={"ID":"aa774b1c-ef48-4bd8-a4df-d3b963e547e6","Type":"ContainerDied","Data":"38c1b4321b1bfe5aa7faa9d9538280326ef5670bad832a44cf113f418d9744dd"} Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.921837 5110 scope.go:117] "RemoveContainer" containerID="a2f3db0644662ca5cac96e5e430af076429ab84bc25e86ff668f51fb2272f0de" Jan 22 14:19:00 crc kubenswrapper[5110]: E0122 14:19:00.927270 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2f3db0644662ca5cac96e5e430af076429ab84bc25e86ff668f51fb2272f0de\": container with ID starting with a2f3db0644662ca5cac96e5e430af076429ab84bc25e86ff668f51fb2272f0de not found: ID does not exist" containerID="a2f3db0644662ca5cac96e5e430af076429ab84bc25e86ff668f51fb2272f0de" Jan 22 14:19:00 crc kubenswrapper[5110]: I0122 14:19:00.927318 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f3db0644662ca5cac96e5e430af076429ab84bc25e86ff668f51fb2272f0de"} err="failed to get container status \"a2f3db0644662ca5cac96e5e430af076429ab84bc25e86ff668f51fb2272f0de\": rpc error: code = NotFound desc = could not find container \"a2f3db0644662ca5cac96e5e430af076429ab84bc25e86ff668f51fb2272f0de\": container with ID starting with a2f3db0644662ca5cac96e5e430af076429ab84bc25e86ff668f51fb2272f0de not found: ID does not exist" Jan 22 14:19:02 crc kubenswrapper[5110]: I0122 14:19:02.870155 5110 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 14:19:02 crc kubenswrapper[5110]: I0122 14:19:02.870489 5110 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 14:19:02 crc kubenswrapper[5110]: I0122 14:19:02.906914 5110 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="869455b8-f444-4ee3-9a9a-c737007425b5"
Jan 22 14:19:02 crc kubenswrapper[5110]: I0122 14:19:02.906951 5110 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="869455b8-f444-4ee3-9a9a-c737007425b5"
Jan 22 14:19:02 crc kubenswrapper[5110]: I0122 14:19:02.912243 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 14:19:02 crc kubenswrapper[5110]: I0122 14:19:02.961305 5110 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="9a6f12ca-df6a-4abd-9881-8cebff6f4fd5"
Jan 22 14:19:03 crc kubenswrapper[5110]: I0122 14:19:03.912745 5110 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="869455b8-f444-4ee3-9a9a-c737007425b5"
Jan 22 14:19:03 crc kubenswrapper[5110]: I0122 14:19:03.912923 5110 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="869455b8-f444-4ee3-9a9a-c737007425b5"
Jan 22 14:19:03 crc kubenswrapper[5110]: I0122 14:19:03.916418 5110 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="9a6f12ca-df6a-4abd-9881-8cebff6f4fd5"
Jan 22 14:19:07 crc kubenswrapper[5110]: I0122 14:19:07.199691 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 14:19:07 crc kubenswrapper[5110]: I0122 14:19:07.199787 5110 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 22 14:19:07 crc kubenswrapper[5110]: I0122 14:19:07.200105 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 22 14:19:12 crc kubenswrapper[5110]: I0122 14:19:12.061771 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Jan 22 14:19:12 crc kubenswrapper[5110]: I0122 14:19:12.188957 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:19:12 crc kubenswrapper[5110]: I0122 14:19:12.236770 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Jan 22 14:19:13 crc kubenswrapper[5110]: I0122 14:19:13.303046 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Jan 22 14:19:13 crc kubenswrapper[5110]: I0122 14:19:13.924104 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 22 14:19:14 crc kubenswrapper[5110]: I0122 14:19:14.566713 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Jan 22 14:19:14 crc kubenswrapper[5110]: I0122 14:19:14.745810 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Jan 22 14:19:14 crc kubenswrapper[5110]: I0122 14:19:14.782781 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:19:14 crc kubenswrapper[5110]: I0122 14:19:14.784544 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Jan 22 14:19:15 crc kubenswrapper[5110]: I0122 14:19:15.000074 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Jan 22 14:19:15 crc kubenswrapper[5110]: I0122 14:19:15.107072 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:19:15 crc kubenswrapper[5110]: I0122 14:19:15.180584 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Jan 22 14:19:15 crc kubenswrapper[5110]: I0122 14:19:15.534415 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Jan 22 14:19:15 crc kubenswrapper[5110]: I0122 14:19:15.551458 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Jan 22 14:19:15 crc kubenswrapper[5110]: I0122 14:19:15.636925 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Jan 22 14:19:15 crc kubenswrapper[5110]: I0122 14:19:15.638931 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 22 14:19:15 crc kubenswrapper[5110]: I0122 14:19:15.722983 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Jan 22 14:19:15 crc kubenswrapper[5110]: I0122 14:19:15.851396 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Jan 22 14:19:15 crc kubenswrapper[5110]: I0122 14:19:15.874805 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Jan 22 14:19:15 crc kubenswrapper[5110]: I0122 14:19:15.887072 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Jan 22 14:19:15 crc kubenswrapper[5110]: I0122 14:19:15.897040 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Jan 22 14:19:16 crc kubenswrapper[5110]: I0122 14:19:16.073426 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Jan 22 14:19:16 crc kubenswrapper[5110]: I0122 14:19:16.216988 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Jan 22 14:19:16 crc kubenswrapper[5110]: I0122 14:19:16.438220 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Jan 22 14:19:16 crc kubenswrapper[5110]: I0122 14:19:16.464924 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Jan 22 14:19:16 crc kubenswrapper[5110]: I0122 14:19:16.494011 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Jan 22 14:19:16 crc kubenswrapper[5110]: I0122 14:19:16.544845 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 22 14:19:16 crc kubenswrapper[5110]: I0122 14:19:16.597031 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Jan 22 14:19:16 crc kubenswrapper[5110]: I0122 14:19:16.675850 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Jan 22 14:19:16 crc kubenswrapper[5110]: I0122 14:19:16.747134 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Jan 22 14:19:16 crc kubenswrapper[5110]: I0122 14:19:16.804863 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Jan 22 14:19:16 crc kubenswrapper[5110]: I0122 14:19:16.879718 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Jan 22 14:19:16 crc kubenswrapper[5110]: I0122 14:19:16.880337 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Jan 22 14:19:17 crc kubenswrapper[5110]: I0122 14:19:17.199467 5110 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 22 14:19:17 crc kubenswrapper[5110]: I0122 14:19:17.199544 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 22 14:19:17 crc kubenswrapper[5110]: I0122 14:19:17.283941 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Jan 22 14:19:17 crc kubenswrapper[5110]: I0122 14:19:17.297502 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Jan 22 14:19:17 crc kubenswrapper[5110]: I0122 14:19:17.646273 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Jan 22 14:19:17 crc kubenswrapper[5110]: I0122 14:19:17.718352 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Jan 22 14:19:17 crc kubenswrapper[5110]: I0122 14:19:17.764321 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Jan 22 14:19:17 crc kubenswrapper[5110]: I0122 14:19:17.892718 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Jan 22 14:19:18 crc kubenswrapper[5110]: I0122 14:19:18.264980 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:19:18 crc kubenswrapper[5110]: I0122 14:19:18.323381 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Jan 22 14:19:18 crc kubenswrapper[5110]: I0122 14:19:18.376207 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Jan 22 14:19:18 crc kubenswrapper[5110]: I0122 14:19:18.382794 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Jan 22 14:19:18 crc kubenswrapper[5110]: I0122 14:19:18.395218 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Jan 22 14:19:18 crc kubenswrapper[5110]: I0122 14:19:18.483900 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Jan 22 14:19:18 crc kubenswrapper[5110]: I0122 14:19:18.544951 5110 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 22 14:19:18 crc kubenswrapper[5110]: I0122 14:19:18.585534 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 22 14:19:18 crc kubenswrapper[5110]: I0122 14:19:18.596922 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Jan 22 14:19:18 crc kubenswrapper[5110]: I0122 14:19:18.726241 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Jan 22 14:19:18 crc kubenswrapper[5110]: I0122 14:19:18.808142 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Jan 22 14:19:18 crc kubenswrapper[5110]: I0122 14:19:18.823958 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Jan 22 14:19:18 crc kubenswrapper[5110]: I0122 14:19:18.905546 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:19:19 crc kubenswrapper[5110]: I0122 14:19:19.074449 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Jan 22 14:19:19 crc kubenswrapper[5110]: I0122 14:19:19.115406 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Jan 22 14:19:19 crc kubenswrapper[5110]: I0122 14:19:19.122196 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Jan 22 14:19:19 crc kubenswrapper[5110]: I0122 14:19:19.145072 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Jan 22 14:19:19 crc kubenswrapper[5110]: I0122 14:19:19.156963 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Jan 22 14:19:19 crc kubenswrapper[5110]: I0122 14:19:19.230651 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 22 14:19:19 crc kubenswrapper[5110]: I0122 14:19:19.381201 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Jan 22 14:19:19 crc kubenswrapper[5110]: I0122 14:19:19.434969 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Jan 22 14:19:19 crc kubenswrapper[5110]: I0122 14:19:19.439387 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Jan 22 14:19:19 crc kubenswrapper[5110]: I0122 14:19:19.514267 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 22 14:19:19 crc kubenswrapper[5110]: I0122 14:19:19.543531 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Jan 22 14:19:19 crc kubenswrapper[5110]: I0122 14:19:19.588732 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Jan 22 14:19:19 crc kubenswrapper[5110]: I0122 14:19:19.649940 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Jan 22 14:19:19 crc kubenswrapper[5110]: I0122 14:19:19.691656 5110 patch_prober.go:28] interesting pod/machine-config-daemon-grf5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 14:19:19 crc kubenswrapper[5110]: I0122 14:19:19.691738 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" podUID="6bfecfa4-ce38-4a92-a3dc-588176267b96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 14:19:19 crc kubenswrapper[5110]: I0122 14:19:19.828690 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Jan 22 14:19:19 crc kubenswrapper[5110]: I0122 14:19:19.894856 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Jan 22 14:19:19 crc kubenswrapper[5110]: I0122 14:19:19.911040 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.011739 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.137435 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.140476 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.147067 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.169409 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.180848 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.200825 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.303362 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.314711 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.322140 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.385272 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.397952 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.409197 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.468648 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.540228 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.552535 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.669119 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.683724 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.693327 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.911504 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.945755 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Jan 22 14:19:20 crc kubenswrapper[5110]: I0122 14:19:20.986141 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.025080 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.059542 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.231563 5110 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.232202 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=38.23218796 podStartE2EDuration="38.23218796s" podCreationTimestamp="2026-01-22 14:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:19:02.896129459 +0000 UTC m=+223.118213838" watchObservedRunningTime="2026-01-22 14:19:21.23218796 +0000 UTC m=+241.454272329"
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.237144 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-66458b6674-pgzmf"]
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.237209 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.245196 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.259451 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.259431419 podStartE2EDuration="19.259431419s" podCreationTimestamp="2026-01-22 14:19:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:19:21.258113534 +0000 UTC m=+241.480197913" watchObservedRunningTime="2026-01-22 14:19:21.259431419 +0000 UTC m=+241.481515778"
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.281446 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.304353 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.324759 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.385820 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.390991 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.412040 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.520573 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.566910 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.650935 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.695457 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.725699 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.888489 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.912459 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Jan 22 14:19:21 crc kubenswrapper[5110]: I0122 14:19:21.936423 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 22 14:19:22 crc kubenswrapper[5110]: I0122 14:19:22.015136 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Jan 22 14:19:22 crc kubenswrapper[5110]: I0122 14:19:22.147813 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Jan 22 14:19:22 crc kubenswrapper[5110]: I0122 14:19:22.176187 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Jan 22 14:19:22 crc kubenswrapper[5110]: I0122 14:19:22.286919 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa774b1c-ef48-4bd8-a4df-d3b963e547e6" path="/var/lib/kubelet/pods/aa774b1c-ef48-4bd8-a4df-d3b963e547e6/volumes"
Jan 22 14:19:22 crc kubenswrapper[5110]: I0122 14:19:22.295868 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 22 14:19:22 crc kubenswrapper[5110]: I0122 14:19:22.305151 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Jan 22 14:19:22 crc kubenswrapper[5110]: I0122 14:19:22.311306 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Jan 22 14:19:22 crc kubenswrapper[5110]: I0122 14:19:22.378337 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Jan 22 14:19:22 crc kubenswrapper[5110]: I0122 14:19:22.478652 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Jan 22 14:19:22 crc kubenswrapper[5110]: I0122 14:19:22.511327 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 22 14:19:22 crc kubenswrapper[5110]: I0122 14:19:22.540703 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Jan 22 14:19:22 crc kubenswrapper[5110]: I0122 14:19:22.584866 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Jan 22 14:19:22 crc kubenswrapper[5110]: I0122 14:19:22.585094 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Jan 22 14:19:22 crc kubenswrapper[5110]: I0122 14:19:22.610143 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Jan 22 14:19:22 crc kubenswrapper[5110]: I0122 14:19:22.702702 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Jan 22 14:19:22 crc kubenswrapper[5110]: I0122 14:19:22.848833 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Jan 22 14:19:22 crc kubenswrapper[5110]: I0122 14:19:22.866544 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Jan 22 14:19:22 crc kubenswrapper[5110]: I0122 14:19:22.909586 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.005543 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.048649 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.128268 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.146264 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.185835 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.366521 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.426674 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.431746 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.449024 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.452303 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.473329 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.499905 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.559031 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.572901 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.632555 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.669010 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.670431 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.676987 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.712229 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.833796 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Jan 22 14:19:23 crc kubenswrapper[5110]: I0122 14:19:23.873011 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.024104 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.039005 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.069443 5110 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.070576 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.170408 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.180817 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.223420 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.235485 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.275599 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.296956 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.369164 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.422882 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.502838 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.520769 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.549446 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.621869 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.626905 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.730067 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.835910 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.837570 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-694667f55-nhlw4"]
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.838164 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c7b0c5f7-6807-40b0-8295-0bac4129a62e" containerName="installer"
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.838179 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b0c5f7-6807-40b0-8295-0bac4129a62e" containerName="installer"
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.838195 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aa774b1c-ef48-4bd8-a4df-d3b963e547e6" containerName="oauth-openshift"
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.838201 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa774b1c-ef48-4bd8-a4df-d3b963e547e6" containerName="oauth-openshift"
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.838298 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="aa774b1c-ef48-4bd8-a4df-d3b963e547e6" containerName="oauth-openshift"
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.838314 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="c7b0c5f7-6807-40b0-8295-0bac4129a62e" containerName="installer"
Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.853075 5110 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.856308 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.856688 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.856923 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.857223 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.857297 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.858163 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.858283 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.858516 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.858561 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.858640 
5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.858663 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.858776 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.858846 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.865114 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.877936 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.886195 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.904908 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.919416 5110 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.988544 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 22 
14:19:24 crc kubenswrapper[5110]: I0122 14:19:24.988899 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.023907 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-router-certs\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.024222 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-session\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.024890 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/329d8e2c-a053-4b58-acac-4758df02a3e8-audit-dir\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.024996 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-service-ca\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 
crc kubenswrapper[5110]: I0122 14:19:25.025105 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.025204 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/329d8e2c-a053-4b58-acac-4758df02a3e8-audit-policies\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.025321 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-user-template-login\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.025414 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxbtk\" (UniqueName: \"kubernetes.io/projected/329d8e2c-a053-4b58-acac-4758df02a3e8-kube-api-access-gxbtk\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.025508 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.025599 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.025721 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.025810 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.025903 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-user-template-error\") pod 
\"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.026025 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.073091 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.089657 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.127486 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.127818 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/329d8e2c-a053-4b58-acac-4758df02a3e8-audit-policies\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.127981 5110 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-user-template-login\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.128119 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gxbtk\" (UniqueName: \"kubernetes.io/projected/329d8e2c-a053-4b58-acac-4758df02a3e8-kube-api-access-gxbtk\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.128236 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.128343 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.128455 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.128558 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.128678 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/329d8e2c-a053-4b58-acac-4758df02a3e8-audit-policies\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.128685 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-user-template-error\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.128516 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.129291 5110 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.129341 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-router-certs\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.129366 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-session\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.129410 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/329d8e2c-a053-4b58-acac-4758df02a3e8-audit-dir\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.129441 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-service-ca\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: 
\"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.130147 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-service-ca\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.131149 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/329d8e2c-a053-4b58-acac-4758df02a3e8-audit-dir\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.131732 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.135322 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-user-template-error\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.135668 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-user-template-login\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.137554 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.137844 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.138402 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.139018 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-router-certs\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" 
Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.139383 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-system-session\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.139981 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/329d8e2c-a053-4b58-acac-4758df02a3e8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.153319 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxbtk\" (UniqueName: \"kubernetes.io/projected/329d8e2c-a053-4b58-acac-4758df02a3e8-kube-api-access-gxbtk\") pod \"oauth-openshift-694667f55-nhlw4\" (UID: \"329d8e2c-a053-4b58-acac-4758df02a3e8\") " pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.172836 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.220811 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.221047 5110 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.221358 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://8bc34df2e378d09e54395cc75c857692fceff35e64815393130f8a51b227429f" gracePeriod=5 Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.280685 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.308697 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.339888 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.465390 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.514090 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.578450 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-694667f55-nhlw4"]
Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.631047 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.706709 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.717845 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.836543 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-694667f55-nhlw4"]
Jan 22 14:19:25 crc kubenswrapper[5110]: I0122 14:19:25.837384 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Jan 22 14:19:26 crc kubenswrapper[5110]: I0122 14:19:26.015192 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Jan 22 14:19:26 crc kubenswrapper[5110]: I0122 14:19:26.043379 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" event={"ID":"329d8e2c-a053-4b58-acac-4758df02a3e8","Type":"ContainerStarted","Data":"0c060a607adf4069c834a1e0fd631737a92e6ae9c0084a63118dad7f2db82a18"}
Jan 22 14:19:26 crc kubenswrapper[5110]: I0122 14:19:26.080196 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Jan 22 14:19:26 crc kubenswrapper[5110]: I0122 14:19:26.139317 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Jan 22 14:19:26 crc kubenswrapper[5110]: I0122 14:19:26.204197 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Jan 22 14:19:26 crc kubenswrapper[5110]: I0122 14:19:26.277134 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Jan 22 14:19:26 crc kubenswrapper[5110]: I0122 14:19:26.277749 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Jan 22 14:19:26 crc kubenswrapper[5110]: I0122 14:19:26.334273 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Jan 22 14:19:26 crc kubenswrapper[5110]: I0122 14:19:26.344254 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Jan 22 14:19:26 crc kubenswrapper[5110]: I0122 14:19:26.509152 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Jan 22 14:19:26 crc kubenswrapper[5110]: I0122 14:19:26.517008 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Jan 22 14:19:26 crc kubenswrapper[5110]: I0122 14:19:26.545986 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 22 14:19:26 crc kubenswrapper[5110]: I0122 14:19:26.565668 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Jan 22 14:19:26 crc kubenswrapper[5110]: I0122 14:19:26.577028 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Jan 22 14:19:26 crc kubenswrapper[5110]: I0122 14:19:26.745571 5110 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 22 14:19:26 crc kubenswrapper[5110]: I0122 14:19:26.867387 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 22 14:19:26 crc kubenswrapper[5110]: I0122 14:19:26.881913 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Jan 22 14:19:26 crc kubenswrapper[5110]: I0122 14:19:26.887551 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.052262 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-694667f55-nhlw4_329d8e2c-a053-4b58-acac-4758df02a3e8/oauth-openshift/0.log"
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.052322 5110 generic.go:358] "Generic (PLEG): container finished" podID="329d8e2c-a053-4b58-acac-4758df02a3e8" containerID="e7ad7d29ed4a0aee97d74614becdfa2884e8d9a9352db32644408865fabdc700" exitCode=255
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.052465 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" event={"ID":"329d8e2c-a053-4b58-acac-4758df02a3e8","Type":"ContainerDied","Data":"e7ad7d29ed4a0aee97d74614becdfa2884e8d9a9352db32644408865fabdc700"}
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.052918 5110 scope.go:117] "RemoveContainer" containerID="e7ad7d29ed4a0aee97d74614becdfa2884e8d9a9352db32644408865fabdc700"
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.110856 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.156538 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.199860 5110 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.199928 5110 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.199982 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.200633 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"729c7e63c92a71ca30e9438f1c8ab959275423b9b97d92aa5e1b0663455da83f"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.200728 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" containerID="cri-o://729c7e63c92a71ca30e9438f1c8ab959275423b9b97d92aa5e1b0663455da83f" gracePeriod=30
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.207398 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.236491 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.523215 5110 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.594006 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.611043 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.653522 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.754569 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.813926 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.877429 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.966834 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.991631 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Jan 22 14:19:27 crc kubenswrapper[5110]: I0122 14:19:27.997594 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.060158 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-694667f55-nhlw4_329d8e2c-a053-4b58-acac-4758df02a3e8/oauth-openshift/1.log"
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.060773 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-694667f55-nhlw4_329d8e2c-a053-4b58-acac-4758df02a3e8/oauth-openshift/0.log"
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.060815 5110 generic.go:358] "Generic (PLEG): container finished" podID="329d8e2c-a053-4b58-acac-4758df02a3e8" containerID="6d44ef234c8ddd28d26559db196b9956142d484be3421055840882a0b94d4892" exitCode=255
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.060944 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" event={"ID":"329d8e2c-a053-4b58-acac-4758df02a3e8","Type":"ContainerDied","Data":"6d44ef234c8ddd28d26559db196b9956142d484be3421055840882a0b94d4892"}
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.060993 5110 scope.go:117] "RemoveContainer" containerID="e7ad7d29ed4a0aee97d74614becdfa2884e8d9a9352db32644408865fabdc700"
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.061930 5110 scope.go:117] "RemoveContainer" containerID="6d44ef234c8ddd28d26559db196b9956142d484be3421055840882a0b94d4892"
Jan 22 14:19:28 crc kubenswrapper[5110]: E0122 14:19:28.062739 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-694667f55-nhlw4_openshift-authentication(329d8e2c-a053-4b58-acac-4758df02a3e8)\"" pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" podUID="329d8e2c-a053-4b58-acac-4758df02a3e8"
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.085942 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.168406 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.265894 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.279473 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.302553 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.344036 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.608439 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.648490 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.656126 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.690603 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.721109 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.830082 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Jan 22 14:19:28 crc kubenswrapper[5110]: I0122 14:19:28.898454 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Jan 22 14:19:29 crc kubenswrapper[5110]: I0122 14:19:29.037086 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Jan 22 14:19:29 crc kubenswrapper[5110]: I0122 14:19:29.069215 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-694667f55-nhlw4_329d8e2c-a053-4b58-acac-4758df02a3e8/oauth-openshift/1.log"
Jan 22 14:19:29 crc kubenswrapper[5110]: I0122 14:19:29.070555 5110 scope.go:117] "RemoveContainer" containerID="6d44ef234c8ddd28d26559db196b9956142d484be3421055840882a0b94d4892"
Jan 22 14:19:29 crc kubenswrapper[5110]: E0122 14:19:29.070969 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-694667f55-nhlw4_openshift-authentication(329d8e2c-a053-4b58-acac-4758df02a3e8)\"" pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" podUID="329d8e2c-a053-4b58-acac-4758df02a3e8"
Jan 22 14:19:29 crc kubenswrapper[5110]: I0122 14:19:29.183069 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Jan 22 14:19:29 crc kubenswrapper[5110]: I0122 14:19:29.347155 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Jan 22 14:19:29 crc kubenswrapper[5110]: I0122 14:19:29.470146 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Jan 22 14:19:29 crc kubenswrapper[5110]: I0122 14:19:29.885080 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Jan 22 14:19:29 crc kubenswrapper[5110]: I0122 14:19:29.892342 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.114973 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.132983 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.179277 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.422795 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.475328 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.801816 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.801899 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.807213 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.807317 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.807339 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.807399 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.807414 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.807490 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.807564 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.807665 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.807432 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.808039 5110 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\""
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.808072 5110 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.808093 5110 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\""
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.808106 5110 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\""
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.816996 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 22 14:19:30 crc kubenswrapper[5110]: I0122 14:19:30.909305 5110 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 22 14:19:31 crc kubenswrapper[5110]: I0122 14:19:31.086974 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Jan 22 14:19:31 crc kubenswrapper[5110]: I0122 14:19:31.087017 5110 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="8bc34df2e378d09e54395cc75c857692fceff35e64815393130f8a51b227429f" exitCode=137
Jan 22 14:19:31 crc kubenswrapper[5110]: I0122 14:19:31.087136 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 14:19:31 crc kubenswrapper[5110]: I0122 14:19:31.087175 5110 scope.go:117] "RemoveContainer" containerID="8bc34df2e378d09e54395cc75c857692fceff35e64815393130f8a51b227429f"
Jan 22 14:19:31 crc kubenswrapper[5110]: I0122 14:19:31.110136 5110 scope.go:117] "RemoveContainer" containerID="8bc34df2e378d09e54395cc75c857692fceff35e64815393130f8a51b227429f"
Jan 22 14:19:31 crc kubenswrapper[5110]: E0122 14:19:31.110708 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bc34df2e378d09e54395cc75c857692fceff35e64815393130f8a51b227429f\": container with ID starting with 8bc34df2e378d09e54395cc75c857692fceff35e64815393130f8a51b227429f not found: ID does not exist" containerID="8bc34df2e378d09e54395cc75c857692fceff35e64815393130f8a51b227429f"
Jan 22 14:19:31 crc kubenswrapper[5110]: I0122 14:19:31.110804 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bc34df2e378d09e54395cc75c857692fceff35e64815393130f8a51b227429f"} err="failed to get container status \"8bc34df2e378d09e54395cc75c857692fceff35e64815393130f8a51b227429f\": rpc error: code = NotFound desc = could not find container \"8bc34df2e378d09e54395cc75c857692fceff35e64815393130f8a51b227429f\": container with ID starting with 8bc34df2e378d09e54395cc75c857692fceff35e64815393130f8a51b227429f not found: ID does not exist"
Jan 22 14:19:31 crc kubenswrapper[5110]: I0122 14:19:31.280864 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Jan 22 14:19:31 crc kubenswrapper[5110]: I0122 14:19:31.482463 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Jan 22 14:19:31 crc kubenswrapper[5110]: I0122 14:19:31.971198 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Jan 22 14:19:32 crc kubenswrapper[5110]: I0122 14:19:32.279872 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes"
Jan 22 14:19:32 crc kubenswrapper[5110]: I0122 14:19:32.280349 5110 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 22 14:19:32 crc kubenswrapper[5110]: I0122 14:19:32.289803 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 22 14:19:32 crc kubenswrapper[5110]: I0122 14:19:32.289850 5110 kubelet.go:2759] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6ee4e992-6eec-446e-a8dd-6624f4ba789e"
Jan 22 14:19:32 crc kubenswrapper[5110]: I0122 14:19:32.294171 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 22 14:19:32 crc kubenswrapper[5110]: I0122 14:19:32.294212 5110 kubelet.go:2784] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6ee4e992-6eec-446e-a8dd-6624f4ba789e"
Jan 22 14:19:32 crc kubenswrapper[5110]: I0122 14:19:32.935201 5110 ???:1] "http: TLS handshake error from 192.168.126.11:55200: no serving certificate available for the kubelet"
Jan 22 14:19:35 crc kubenswrapper[5110]: I0122 14:19:35.173607 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-694667f55-nhlw4"
Jan 22 14:19:35 crc kubenswrapper[5110]: I0122 14:19:35.174040 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-694667f55-nhlw4"
Jan 22 14:19:35 crc kubenswrapper[5110]: I0122 14:19:35.175158 5110 scope.go:117] "RemoveContainer" containerID="6d44ef234c8ddd28d26559db196b9956142d484be3421055840882a0b94d4892"
Jan 22 14:19:35 crc kubenswrapper[5110]: E0122 14:19:35.175691 5110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-694667f55-nhlw4_openshift-authentication(329d8e2c-a053-4b58-acac-4758df02a3e8)\"" pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" podUID="329d8e2c-a053-4b58-acac-4758df02a3e8"
Jan 22 14:19:45 crc kubenswrapper[5110]: I0122 14:19:45.175166 5110 generic.go:358] "Generic (PLEG): container finished" podID="3adc63ca-ac54-461a-9a91-10ba0b85fa2b" containerID="9531453046d5270ca61029f306a797316355364b48f372a98cccf355e8005f9e" exitCode=0
Jan 22 14:19:45 crc kubenswrapper[5110]: I0122 14:19:45.175359 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" event={"ID":"3adc63ca-ac54-461a-9a91-10ba0b85fa2b","Type":"ContainerDied","Data":"9531453046d5270ca61029f306a797316355364b48f372a98cccf355e8005f9e"}
Jan 22 14:19:45 crc kubenswrapper[5110]: I0122 14:19:45.176339 5110 scope.go:117] "RemoveContainer" containerID="9531453046d5270ca61029f306a797316355364b48f372a98cccf355e8005f9e"
Jan 22 14:19:46 crc kubenswrapper[5110]: I0122 14:19:46.186491 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" event={"ID":"3adc63ca-ac54-461a-9a91-10ba0b85fa2b","Type":"ContainerStarted","Data":"3dc86dc50f5848a37e24e1fa27b22f8abc3b2972b4837206e372864b64b8e0a2"}
Jan 22 14:19:46 crc kubenswrapper[5110]: I0122 14:19:46.189037 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw"
Jan 22 14:19:46 crc kubenswrapper[5110]: I0122 14:19:46.191676 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw"
Jan 22 14:19:47 crc kubenswrapper[5110]: I0122 14:19:47.274213 5110 scope.go:117] "RemoveContainer" containerID="6d44ef234c8ddd28d26559db196b9956142d484be3421055840882a0b94d4892"
Jan 22 14:19:48 crc kubenswrapper[5110]: I0122 14:19:48.201374 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-694667f55-nhlw4_329d8e2c-a053-4b58-acac-4758df02a3e8/oauth-openshift/1.log"
Jan 22 14:19:48 crc kubenswrapper[5110]: I0122 14:19:48.201812 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" event={"ID":"329d8e2c-a053-4b58-acac-4758df02a3e8","Type":"ContainerStarted","Data":"e909c11cf7ee9c51404850311862a16f4925102bd39fb729b898c8ae70e1ac3b"}
Jan 22 14:19:48 crc kubenswrapper[5110]: I0122 14:19:48.202375 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-694667f55-nhlw4"
Jan 22 14:19:48 crc kubenswrapper[5110]: I0122 14:19:48.224484 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-694667f55-nhlw4" podStartSLOduration=73.224471061 podStartE2EDuration="1m13.224471061s" podCreationTimestamp="2026-01-22 14:18:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:19:48.221817491 +0000 UTC m=+268.443901860" watchObservedRunningTime="2026-01-22 14:19:48.224471061 +0000 UTC m=+268.446555420"
Jan 22 14:19:48 crc kubenswrapper[5110]: I0122 14:19:48.462721 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-694667f55-nhlw4"
Jan 22 14:19:49 crc kubenswrapper[5110]: I0122 14:19:49.691665 5110 patch_prober.go:28] interesting pod/machine-config-daemon-grf5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 14:19:49 crc kubenswrapper[5110]: I0122 14:19:49.691739 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" podUID="6bfecfa4-ce38-4a92-a3dc-588176267b96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 14:19:49 crc kubenswrapper[5110]: I0122 14:19:49.691791 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grf5q"
Jan 22 14:19:49 crc kubenswrapper[5110]: I0122 14:19:49.692459 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ecaf8dd09571a6f4b5924d1e4734e6a3c16b4eff3df4258d7238252038bcd11"} pod="openshift-machine-config-operator/machine-config-daemon-grf5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 14:19:49 crc kubenswrapper[5110]: I0122 14:19:49.692553 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" podUID="6bfecfa4-ce38-4a92-a3dc-588176267b96" containerName="machine-config-daemon" containerID="cri-o://6ecaf8dd09571a6f4b5924d1e4734e6a3c16b4eff3df4258d7238252038bcd11" gracePeriod=600
Jan 22 14:19:50 crc kubenswrapper[5110]: I0122 14:19:50.216043 5110 generic.go:358] "Generic (PLEG): container finished" podID="6bfecfa4-ce38-4a92-a3dc-588176267b96" containerID="6ecaf8dd09571a6f4b5924d1e4734e6a3c16b4eff3df4258d7238252038bcd11" exitCode=0
Jan 22 14:19:50 crc kubenswrapper[5110]: I0122 14:19:50.216134 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" event={"ID":"6bfecfa4-ce38-4a92-a3dc-588176267b96","Type":"ContainerDied","Data":"6ecaf8dd09571a6f4b5924d1e4734e6a3c16b4eff3df4258d7238252038bcd11"}
Jan 22 14:19:50 crc kubenswrapper[5110]: I0122 14:19:50.216905 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" event={"ID":"6bfecfa4-ce38-4a92-a3dc-588176267b96","Type":"ContainerStarted","Data":"7d6656aaa510b1e729222f6ed0ef8ff67bc1783a0b2496049ceeafc8a653fe2b"}
Jan 22 14:19:55 crc kubenswrapper[5110]: I0122 14:19:55.409685 5110 ???:1] "http: TLS handshake error from 192.168.126.11:33136: no serving certificate available for the kubelet"
Jan 22 14:19:58 crc kubenswrapper[5110]: I0122 14:19:58.262570 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 22 14:19:58 crc kubenswrapper[5110]: I0122 14:19:58.264711 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 22 14:19:58 crc kubenswrapper[5110]: I0122 14:19:58.264755 5110 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="729c7e63c92a71ca30e9438f1c8ab959275423b9b97d92aa5e1b0663455da83f" exitCode=137
Jan 22 14:19:58 crc kubenswrapper[5110]: I0122 14:19:58.264902 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"729c7e63c92a71ca30e9438f1c8ab959275423b9b97d92aa5e1b0663455da83f"}
Jan 22 14:19:58 crc kubenswrapper[5110]: I0122 14:19:58.264964 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"1018540f822544d6615d6c220543aea0ceeb6550419cd34ab88ba117d1359ce4"}
Jan 22 14:19:58 crc kubenswrapper[5110]: I0122 14:19:58.264987 5110 scope.go:117] "RemoveContainer" containerID="4d58e05af32372135da721ce73766f6ea10f366c5c1b13a318803ff65725c882"
Jan 22 14:19:59 crc kubenswrapper[5110]: I0122 14:19:59.274956 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 22 14:20:00 crc kubenswrapper[5110]: I0122 14:20:00.294895 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 14:20:07 crc kubenswrapper[5110]: I0122 14:20:07.199761 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 14:20:07 crc kubenswrapper[5110]: I0122 14:20:07.204746 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 14:20:07 crc kubenswrapper[5110]: I0122 14:20:07.367122 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.550718 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-qq9vl"]
Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.551809 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" podUID="7f92c314-2d0f-42f1-97d2-3914c2f2a73c" containerName="controller-manager" containerID="cri-o://e8697df7457e9827da840fec8d0661c8daf779f3b30a49b111baaf69c300a4f8" gracePeriod=30
Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.553344 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl"]
Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.553691 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" podUID="9f7f31bb-3b08-490d-8e92-09bb8ce46b18" containerName="route-controller-manager" containerID="cri-o://aea935caf33721a89fa644f7d9432ef5be32005c04ee15a79d7bfb5a86591cd2" gracePeriod=30
Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.877823 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl"
Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.910330 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2"]
Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.911155 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9f7f31bb-3b08-490d-8e92-09bb8ce46b18" containerName="route-controller-manager"
Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.911181 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7f31bb-3b08-490d-8e92-09bb8ce46b18" containerName="route-controller-manager"
Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.911231 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.911242 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.911450 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.911503 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="9f7f31bb-3b08-490d-8e92-09bb8ce46b18" containerName="route-controller-manager"
Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.920211 5110 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.921800 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2"] Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.921937 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.946147 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-tmp\") pod \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.946221 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-config\") pod \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.946320 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-serving-cert\") pod \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.946358 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-client-ca\") pod \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.946906 5110 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-tmp" (OuterVolumeSpecName: "tmp") pod "9f7f31bb-3b08-490d-8e92-09bb8ce46b18" (UID: "9f7f31bb-3b08-490d-8e92-09bb8ce46b18"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.946977 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lmz8\" (UniqueName: \"kubernetes.io/projected/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-kube-api-access-9lmz8\") pod \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\" (UID: \"9f7f31bb-3b08-490d-8e92-09bb8ce46b18\") " Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.947254 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-client-ca" (OuterVolumeSpecName: "client-ca") pod "9f7f31bb-3b08-490d-8e92-09bb8ce46b18" (UID: "9f7f31bb-3b08-490d-8e92-09bb8ce46b18"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.947300 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-config" (OuterVolumeSpecName: "config") pod "9f7f31bb-3b08-490d-8e92-09bb8ce46b18" (UID: "9f7f31bb-3b08-490d-8e92-09bb8ce46b18"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.947870 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.947888 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.947897 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.950716 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd"] Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.951851 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f92c314-2d0f-42f1-97d2-3914c2f2a73c" containerName="controller-manager" Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.951870 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f92c314-2d0f-42f1-97d2-3914c2f2a73c" containerName="controller-manager" Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.951973 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f92c314-2d0f-42f1-97d2-3914c2f2a73c" containerName="controller-manager" Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.953067 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-kube-api-access-9lmz8" (OuterVolumeSpecName: "kube-api-access-9lmz8") pod "9f7f31bb-3b08-490d-8e92-09bb8ce46b18" (UID: "9f7f31bb-3b08-490d-8e92-09bb8ce46b18"). 
InnerVolumeSpecName "kube-api-access-9lmz8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.958970 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.968582 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd"] Jan 22 14:20:18 crc kubenswrapper[5110]: I0122 14:20:18.971013 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9f7f31bb-3b08-490d-8e92-09bb8ce46b18" (UID: "9f7f31bb-3b08-490d-8e92-09bb8ce46b18"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.048555 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-tmp\") pod \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.048617 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-config\") pod \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.048661 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-client-ca\") pod \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 
14:20:19.048797 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s8sd\" (UniqueName: \"kubernetes.io/projected/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-kube-api-access-7s8sd\") pod \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.048839 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-proxy-ca-bundles\") pod \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.048858 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-serving-cert\") pod \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\" (UID: \"7f92c314-2d0f-42f1-97d2-3914c2f2a73c\") " Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.049036 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-tmp" (OuterVolumeSpecName: "tmp") pod "7f92c314-2d0f-42f1-97d2-3914c2f2a73c" (UID: "7f92c314-2d0f-42f1-97d2-3914c2f2a73c"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.049433 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d156614-4cec-4d35-b8a2-733af21d9b61-config\") pod \"controller-manager-69dc94bbc8-t8zcd\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") " pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.049479 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f9bbbb8-5106-45dc-ac4b-89f80a345518-serving-cert\") pod \"route-controller-manager-65989d4b98-2pvn2\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") " pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.049515 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d156614-4cec-4d35-b8a2-733af21d9b61-proxy-ca-bundles\") pod \"controller-manager-69dc94bbc8-t8zcd\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") " pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.049534 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d156614-4cec-4d35-b8a2-733af21d9b61-tmp\") pod \"controller-manager-69dc94bbc8-t8zcd\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") " pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.049555 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pphq\" (UniqueName: 
\"kubernetes.io/projected/7d156614-4cec-4d35-b8a2-733af21d9b61-kube-api-access-6pphq\") pod \"controller-manager-69dc94bbc8-t8zcd\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") " pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.049571 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fb8q\" (UniqueName: \"kubernetes.io/projected/3f9bbbb8-5106-45dc-ac4b-89f80a345518-kube-api-access-8fb8q\") pod \"route-controller-manager-65989d4b98-2pvn2\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") " pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.049575 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7f92c314-2d0f-42f1-97d2-3914c2f2a73c" (UID: "7f92c314-2d0f-42f1-97d2-3914c2f2a73c"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.049707 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d156614-4cec-4d35-b8a2-733af21d9b61-client-ca\") pod \"controller-manager-69dc94bbc8-t8zcd\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") " pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.049752 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3f9bbbb8-5106-45dc-ac4b-89f80a345518-tmp\") pod \"route-controller-manager-65989d4b98-2pvn2\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") " pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.049803 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-client-ca" (OuterVolumeSpecName: "client-ca") pod "7f92c314-2d0f-42f1-97d2-3914c2f2a73c" (UID: "7f92c314-2d0f-42f1-97d2-3914c2f2a73c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.049841 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f9bbbb8-5106-45dc-ac4b-89f80a345518-client-ca\") pod \"route-controller-manager-65989d4b98-2pvn2\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") " pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.050156 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-config" (OuterVolumeSpecName: "config") pod "7f92c314-2d0f-42f1-97d2-3914c2f2a73c" (UID: "7f92c314-2d0f-42f1-97d2-3914c2f2a73c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.050591 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d156614-4cec-4d35-b8a2-733af21d9b61-serving-cert\") pod \"controller-manager-69dc94bbc8-t8zcd\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") " pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.050702 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f9bbbb8-5106-45dc-ac4b-89f80a345518-config\") pod \"route-controller-manager-65989d4b98-2pvn2\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") " pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.050874 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.050897 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.050908 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9lmz8\" (UniqueName: \"kubernetes.io/projected/9f7f31bb-3b08-490d-8e92-09bb8ce46b18-kube-api-access-9lmz8\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.050919 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.050928 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.050936 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.052147 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7f92c314-2d0f-42f1-97d2-3914c2f2a73c" (UID: "7f92c314-2d0f-42f1-97d2-3914c2f2a73c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.053344 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-kube-api-access-7s8sd" (OuterVolumeSpecName: "kube-api-access-7s8sd") pod "7f92c314-2d0f-42f1-97d2-3914c2f2a73c" (UID: "7f92c314-2d0f-42f1-97d2-3914c2f2a73c"). InnerVolumeSpecName "kube-api-access-7s8sd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.151928 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d156614-4cec-4d35-b8a2-733af21d9b61-client-ca\") pod \"controller-manager-69dc94bbc8-t8zcd\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") " pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.152063 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3f9bbbb8-5106-45dc-ac4b-89f80a345518-tmp\") pod \"route-controller-manager-65989d4b98-2pvn2\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") " pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.152113 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f9bbbb8-5106-45dc-ac4b-89f80a345518-client-ca\") pod \"route-controller-manager-65989d4b98-2pvn2\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") " pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.152163 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7d156614-4cec-4d35-b8a2-733af21d9b61-serving-cert\") pod \"controller-manager-69dc94bbc8-t8zcd\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") " pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.152814 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3f9bbbb8-5106-45dc-ac4b-89f80a345518-tmp\") pod \"route-controller-manager-65989d4b98-2pvn2\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") " pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.152886 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f9bbbb8-5106-45dc-ac4b-89f80a345518-config\") pod \"route-controller-manager-65989d4b98-2pvn2\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") " pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.153013 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d156614-4cec-4d35-b8a2-733af21d9b61-config\") pod \"controller-manager-69dc94bbc8-t8zcd\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") " pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.153067 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f9bbbb8-5106-45dc-ac4b-89f80a345518-serving-cert\") pod \"route-controller-manager-65989d4b98-2pvn2\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") " pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.153146 5110 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d156614-4cec-4d35-b8a2-733af21d9b61-proxy-ca-bundles\") pod \"controller-manager-69dc94bbc8-t8zcd\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") " pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.153179 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d156614-4cec-4d35-b8a2-733af21d9b61-tmp\") pod \"controller-manager-69dc94bbc8-t8zcd\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") " pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.153218 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6pphq\" (UniqueName: \"kubernetes.io/projected/7d156614-4cec-4d35-b8a2-733af21d9b61-kube-api-access-6pphq\") pod \"controller-manager-69dc94bbc8-t8zcd\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") " pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.153243 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8fb8q\" (UniqueName: \"kubernetes.io/projected/3f9bbbb8-5106-45dc-ac4b-89f80a345518-kube-api-access-8fb8q\") pod \"route-controller-manager-65989d4b98-2pvn2\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") " pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.153313 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7s8sd\" (UniqueName: \"kubernetes.io/projected/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-kube-api-access-7s8sd\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.153328 5110 
reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f92c314-2d0f-42f1-97d2-3914c2f2a73c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.153974 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f9bbbb8-5106-45dc-ac4b-89f80a345518-client-ca\") pod \"route-controller-manager-65989d4b98-2pvn2\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") " pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.154209 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d156614-4cec-4d35-b8a2-733af21d9b61-client-ca\") pod \"controller-manager-69dc94bbc8-t8zcd\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") " pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.154748 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f9bbbb8-5106-45dc-ac4b-89f80a345518-config\") pod \"route-controller-manager-65989d4b98-2pvn2\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") " pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.155098 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d156614-4cec-4d35-b8a2-733af21d9b61-tmp\") pod \"controller-manager-69dc94bbc8-t8zcd\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") " pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.155284 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7d156614-4cec-4d35-b8a2-733af21d9b61-proxy-ca-bundles\") pod \"controller-manager-69dc94bbc8-t8zcd\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") " pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd"
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.155848 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d156614-4cec-4d35-b8a2-733af21d9b61-config\") pod \"controller-manager-69dc94bbc8-t8zcd\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") " pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd"
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.159992 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f9bbbb8-5106-45dc-ac4b-89f80a345518-serving-cert\") pod \"route-controller-manager-65989d4b98-2pvn2\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") " pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2"
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.161530 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d156614-4cec-4d35-b8a2-733af21d9b61-serving-cert\") pod \"controller-manager-69dc94bbc8-t8zcd\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") " pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd"
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.170585 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fb8q\" (UniqueName: \"kubernetes.io/projected/3f9bbbb8-5106-45dc-ac4b-89f80a345518-kube-api-access-8fb8q\") pod \"route-controller-manager-65989d4b98-2pvn2\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") " pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2"
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.175184 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pphq\" (UniqueName: \"kubernetes.io/projected/7d156614-4cec-4d35-b8a2-733af21d9b61-kube-api-access-6pphq\") pod \"controller-manager-69dc94bbc8-t8zcd\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") " pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd"
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.236991 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2"
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.281650 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd"
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.438311 5110 generic.go:358] "Generic (PLEG): container finished" podID="7f92c314-2d0f-42f1-97d2-3914c2f2a73c" containerID="e8697df7457e9827da840fec8d0661c8daf779f3b30a49b111baaf69c300a4f8" exitCode=0
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.438464 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" event={"ID":"7f92c314-2d0f-42f1-97d2-3914c2f2a73c","Type":"ContainerDied","Data":"e8697df7457e9827da840fec8d0661c8daf779f3b30a49b111baaf69c300a4f8"}
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.438850 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl" event={"ID":"7f92c314-2d0f-42f1-97d2-3914c2f2a73c","Type":"ContainerDied","Data":"4b4ffd734f44577eef14666bbec1c50d0fecfc93ddbf7f770891e97237a7dfdd"}
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.438872 5110 scope.go:117] "RemoveContainer" containerID="e8697df7457e9827da840fec8d0661c8daf779f3b30a49b111baaf69c300a4f8"
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.438543 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-qq9vl"
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.443154 5110 generic.go:358] "Generic (PLEG): container finished" podID="9f7f31bb-3b08-490d-8e92-09bb8ce46b18" containerID="aea935caf33721a89fa644f7d9432ef5be32005c04ee15a79d7bfb5a86591cd2" exitCode=0
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.443270 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" event={"ID":"9f7f31bb-3b08-490d-8e92-09bb8ce46b18","Type":"ContainerDied","Data":"aea935caf33721a89fa644f7d9432ef5be32005c04ee15a79d7bfb5a86591cd2"}
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.443297 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl" event={"ID":"9f7f31bb-3b08-490d-8e92-09bb8ce46b18","Type":"ContainerDied","Data":"4e095ecf1f184bf3bfba5e875956d0761bbca73afb7de4fb5edc5c997e0e306d"}
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.443380 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl"
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.461371 5110 scope.go:117] "RemoveContainer" containerID="e8697df7457e9827da840fec8d0661c8daf779f3b30a49b111baaf69c300a4f8"
Jan 22 14:20:19 crc kubenswrapper[5110]: E0122 14:20:19.462046 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8697df7457e9827da840fec8d0661c8daf779f3b30a49b111baaf69c300a4f8\": container with ID starting with e8697df7457e9827da840fec8d0661c8daf779f3b30a49b111baaf69c300a4f8 not found: ID does not exist" containerID="e8697df7457e9827da840fec8d0661c8daf779f3b30a49b111baaf69c300a4f8"
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.462086 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8697df7457e9827da840fec8d0661c8daf779f3b30a49b111baaf69c300a4f8"} err="failed to get container status \"e8697df7457e9827da840fec8d0661c8daf779f3b30a49b111baaf69c300a4f8\": rpc error: code = NotFound desc = could not find container \"e8697df7457e9827da840fec8d0661c8daf779f3b30a49b111baaf69c300a4f8\": container with ID starting with e8697df7457e9827da840fec8d0661c8daf779f3b30a49b111baaf69c300a4f8 not found: ID does not exist"
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.462111 5110 scope.go:117] "RemoveContainer" containerID="aea935caf33721a89fa644f7d9432ef5be32005c04ee15a79d7bfb5a86591cd2"
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.469552 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2"]
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.476998 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-qq9vl"]
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.480035 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-qq9vl"]
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.491828 5110 scope.go:117] "RemoveContainer" containerID="aea935caf33721a89fa644f7d9432ef5be32005c04ee15a79d7bfb5a86591cd2"
Jan 22 14:20:19 crc kubenswrapper[5110]: E0122 14:20:19.492299 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aea935caf33721a89fa644f7d9432ef5be32005c04ee15a79d7bfb5a86591cd2\": container with ID starting with aea935caf33721a89fa644f7d9432ef5be32005c04ee15a79d7bfb5a86591cd2 not found: ID does not exist" containerID="aea935caf33721a89fa644f7d9432ef5be32005c04ee15a79d7bfb5a86591cd2"
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.492356 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aea935caf33721a89fa644f7d9432ef5be32005c04ee15a79d7bfb5a86591cd2"} err="failed to get container status \"aea935caf33721a89fa644f7d9432ef5be32005c04ee15a79d7bfb5a86591cd2\": rpc error: code = NotFound desc = could not find container \"aea935caf33721a89fa644f7d9432ef5be32005c04ee15a79d7bfb5a86591cd2\": container with ID starting with aea935caf33721a89fa644f7d9432ef5be32005c04ee15a79d7bfb5a86591cd2 not found: ID does not exist"
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.494543 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl"]
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.514896 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-w9fgl"]
Jan 22 14:20:19 crc kubenswrapper[5110]: I0122 14:20:19.532817 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd"]
Jan 22 14:20:19 crc kubenswrapper[5110]: W0122 14:20:19.536478 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d156614_4cec_4d35_b8a2_733af21d9b61.slice/crio-f85db6dc154667d5bd3de692cd57704411324615744603a9986495255c9cf316 WatchSource:0}: Error finding container f85db6dc154667d5bd3de692cd57704411324615744603a9986495255c9cf316: Status 404 returned error can't find the container with id f85db6dc154667d5bd3de692cd57704411324615744603a9986495255c9cf316
Jan 22 14:20:20 crc kubenswrapper[5110]: I0122 14:20:20.289526 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f92c314-2d0f-42f1-97d2-3914c2f2a73c" path="/var/lib/kubelet/pods/7f92c314-2d0f-42f1-97d2-3914c2f2a73c/volumes"
Jan 22 14:20:20 crc kubenswrapper[5110]: I0122 14:20:20.290451 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f7f31bb-3b08-490d-8e92-09bb8ce46b18" path="/var/lib/kubelet/pods/9f7f31bb-3b08-490d-8e92-09bb8ce46b18/volumes"
Jan 22 14:20:20 crc kubenswrapper[5110]: I0122 14:20:20.357316 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-694667f55-nhlw4_329d8e2c-a053-4b58-acac-4758df02a3e8/oauth-openshift/1.log"
Jan 22 14:20:20 crc kubenswrapper[5110]: I0122 14:20:20.357816 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-694667f55-nhlw4_329d8e2c-a053-4b58-acac-4758df02a3e8/oauth-openshift/1.log"
Jan 22 14:20:20 crc kubenswrapper[5110]: I0122 14:20:20.424738 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 22 14:20:20 crc kubenswrapper[5110]: I0122 14:20:20.426126 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 22 14:20:20 crc kubenswrapper[5110]: I0122 14:20:20.456063 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" event={"ID":"7d156614-4cec-4d35-b8a2-733af21d9b61","Type":"ContainerStarted","Data":"b4c98d8dadcc2f760b10f0fd0e48a091957d04de41b067ef0d0e356b1d284316"}
Jan 22 14:20:20 crc kubenswrapper[5110]: I0122 14:20:20.456109 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" event={"ID":"7d156614-4cec-4d35-b8a2-733af21d9b61","Type":"ContainerStarted","Data":"f85db6dc154667d5bd3de692cd57704411324615744603a9986495255c9cf316"}
Jan 22 14:20:20 crc kubenswrapper[5110]: I0122 14:20:20.460287 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" event={"ID":"3f9bbbb8-5106-45dc-ac4b-89f80a345518","Type":"ContainerStarted","Data":"f4ff7561cf5c59850a30f90b1547f98ad64605bbe6f4a1bf2ab3dd8b01878527"}
Jan 22 14:20:20 crc kubenswrapper[5110]: I0122 14:20:20.460333 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" event={"ID":"3f9bbbb8-5106-45dc-ac4b-89f80a345518","Type":"ContainerStarted","Data":"adec06f6f95ce19f94681c4e68f7191e37e6ab3c63cabc4ebec73a5d2f8ac4d2"}
Jan 22 14:20:20 crc kubenswrapper[5110]: I0122 14:20:20.461239 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2"
Jan 22 14:20:20 crc kubenswrapper[5110]: I0122 14:20:20.465952 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2"
Jan 22 14:20:20 crc kubenswrapper[5110]: I0122 14:20:20.475057 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" podStartSLOduration=2.475039344 podStartE2EDuration="2.475039344s" podCreationTimestamp="2026-01-22 14:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:20:20.471997823 +0000 UTC m=+300.694082182" watchObservedRunningTime="2026-01-22 14:20:20.475039344 +0000 UTC m=+300.697123723"
Jan 22 14:20:20 crc kubenswrapper[5110]: I0122 14:20:20.489009 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" podStartSLOduration=2.488995322 podStartE2EDuration="2.488995322s" podCreationTimestamp="2026-01-22 14:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:20:20.48775832 +0000 UTC m=+300.709842689" watchObservedRunningTime="2026-01-22 14:20:20.488995322 +0000 UTC m=+300.711079681"
Jan 22 14:20:21 crc kubenswrapper[5110]: I0122 14:20:21.465881 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd"
Jan 22 14:20:21 crc kubenswrapper[5110]: I0122 14:20:21.470785 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd"
Jan 22 14:20:30 crc kubenswrapper[5110]: I0122 14:20:30.719936 5110 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 22 14:20:32 crc kubenswrapper[5110]: I0122 14:20:32.558481 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd"]
Jan 22 14:20:32 crc kubenswrapper[5110]: I0122 14:20:32.558816 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" podUID="7d156614-4cec-4d35-b8a2-733af21d9b61" containerName="controller-manager" containerID="cri-o://b4c98d8dadcc2f760b10f0fd0e48a091957d04de41b067ef0d0e356b1d284316" gracePeriod=30
Jan 22 14:20:32 crc kubenswrapper[5110]: I0122 14:20:32.575244 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2"]
Jan 22 14:20:32 crc kubenswrapper[5110]: I0122 14:20:32.575539 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" podUID="3f9bbbb8-5106-45dc-ac4b-89f80a345518" containerName="route-controller-manager" containerID="cri-o://f4ff7561cf5c59850a30f90b1547f98ad64605bbe6f4a1bf2ab3dd8b01878527" gracePeriod=30
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.056104 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.093296 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"]
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.094032 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f9bbbb8-5106-45dc-ac4b-89f80a345518" containerName="route-controller-manager"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.094059 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9bbbb8-5106-45dc-ac4b-89f80a345518" containerName="route-controller-manager"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.094201 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3f9bbbb8-5106-45dc-ac4b-89f80a345518" containerName="route-controller-manager"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.099735 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.125893 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"]
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.131593 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09401ea1-d993-410a-9177-326d83efe29f-client-ca\") pod \"route-controller-manager-58c57c57c4-ljgqh\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.131724 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09401ea1-d993-410a-9177-326d83efe29f-config\") pod \"route-controller-manager-58c57c57c4-ljgqh\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.131756 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/09401ea1-d993-410a-9177-326d83efe29f-tmp\") pod \"route-controller-manager-58c57c57c4-ljgqh\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.131793 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09401ea1-d993-410a-9177-326d83efe29f-serving-cert\") pod \"route-controller-manager-58c57c57c4-ljgqh\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.131936 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ns2r\" (UniqueName: \"kubernetes.io/projected/09401ea1-d993-410a-9177-326d83efe29f-kube-api-access-6ns2r\") pod \"route-controller-manager-58c57c57c4-ljgqh\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.232561 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f9bbbb8-5106-45dc-ac4b-89f80a345518-client-ca\") pod \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") "
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.232746 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3f9bbbb8-5106-45dc-ac4b-89f80a345518-tmp\") pod \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") "
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.232865 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fb8q\" (UniqueName: \"kubernetes.io/projected/3f9bbbb8-5106-45dc-ac4b-89f80a345518-kube-api-access-8fb8q\") pod \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") "
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.232892 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f9bbbb8-5106-45dc-ac4b-89f80a345518-config\") pod \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") "
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.232955 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f9bbbb8-5106-45dc-ac4b-89f80a345518-serving-cert\") pod \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\" (UID: \"3f9bbbb8-5106-45dc-ac4b-89f80a345518\") "
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.233096 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6ns2r\" (UniqueName: \"kubernetes.io/projected/09401ea1-d993-410a-9177-326d83efe29f-kube-api-access-6ns2r\") pod \"route-controller-manager-58c57c57c4-ljgqh\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.233131 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09401ea1-d993-410a-9177-326d83efe29f-client-ca\") pod \"route-controller-manager-58c57c57c4-ljgqh\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.233224 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09401ea1-d993-410a-9177-326d83efe29f-config\") pod \"route-controller-manager-58c57c57c4-ljgqh\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.233263 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/09401ea1-d993-410a-9177-326d83efe29f-tmp\") pod \"route-controller-manager-58c57c57c4-ljgqh\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.233288 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09401ea1-d993-410a-9177-326d83efe29f-serving-cert\") pod \"route-controller-manager-58c57c57c4-ljgqh\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.234000 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f9bbbb8-5106-45dc-ac4b-89f80a345518-tmp" (OuterVolumeSpecName: "tmp") pod "3f9bbbb8-5106-45dc-ac4b-89f80a345518" (UID: "3f9bbbb8-5106-45dc-ac4b-89f80a345518"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.234575 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/09401ea1-d993-410a-9177-326d83efe29f-tmp\") pod \"route-controller-manager-58c57c57c4-ljgqh\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.234585 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f9bbbb8-5106-45dc-ac4b-89f80a345518-client-ca" (OuterVolumeSpecName: "client-ca") pod "3f9bbbb8-5106-45dc-ac4b-89f80a345518" (UID: "3f9bbbb8-5106-45dc-ac4b-89f80a345518"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.234696 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09401ea1-d993-410a-9177-326d83efe29f-client-ca\") pod \"route-controller-manager-58c57c57c4-ljgqh\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.235218 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09401ea1-d993-410a-9177-326d83efe29f-config\") pod \"route-controller-manager-58c57c57c4-ljgqh\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.235264 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f9bbbb8-5106-45dc-ac4b-89f80a345518-config" (OuterVolumeSpecName: "config") pod "3f9bbbb8-5106-45dc-ac4b-89f80a345518" (UID: "3f9bbbb8-5106-45dc-ac4b-89f80a345518"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.239820 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09401ea1-d993-410a-9177-326d83efe29f-serving-cert\") pod \"route-controller-manager-58c57c57c4-ljgqh\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.242829 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f9bbbb8-5106-45dc-ac4b-89f80a345518-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3f9bbbb8-5106-45dc-ac4b-89f80a345518" (UID: "3f9bbbb8-5106-45dc-ac4b-89f80a345518"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.243225 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9bbbb8-5106-45dc-ac4b-89f80a345518-kube-api-access-8fb8q" (OuterVolumeSpecName: "kube-api-access-8fb8q") pod "3f9bbbb8-5106-45dc-ac4b-89f80a345518" (UID: "3f9bbbb8-5106-45dc-ac4b-89f80a345518"). InnerVolumeSpecName "kube-api-access-8fb8q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.253994 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ns2r\" (UniqueName: \"kubernetes.io/projected/09401ea1-d993-410a-9177-326d83efe29f-kube-api-access-6ns2r\") pod \"route-controller-manager-58c57c57c4-ljgqh\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.285700 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.316880 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59b6f7d894-jtb6k"]
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.318226 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d156614-4cec-4d35-b8a2-733af21d9b61" containerName="controller-manager"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.318259 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d156614-4cec-4d35-b8a2-733af21d9b61" containerName="controller-manager"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.318425 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="7d156614-4cec-4d35-b8a2-733af21d9b61" containerName="controller-manager"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.325121 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59b6f7d894-jtb6k"]
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.325313 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.333762 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d156614-4cec-4d35-b8a2-733af21d9b61-serving-cert\") pod \"7d156614-4cec-4d35-b8a2-733af21d9b61\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") "
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.333815 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d156614-4cec-4d35-b8a2-733af21d9b61-proxy-ca-bundles\") pod \"7d156614-4cec-4d35-b8a2-733af21d9b61\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") "
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.333847 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d156614-4cec-4d35-b8a2-733af21d9b61-client-ca\") pod \"7d156614-4cec-4d35-b8a2-733af21d9b61\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") "
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.333910 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d156614-4cec-4d35-b8a2-733af21d9b61-tmp\") pod \"7d156614-4cec-4d35-b8a2-733af21d9b61\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") "
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.333954 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d156614-4cec-4d35-b8a2-733af21d9b61-config\") pod \"7d156614-4cec-4d35-b8a2-733af21d9b61\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") "
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.333979 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pphq\" (UniqueName: \"kubernetes.io/projected/7d156614-4cec-4d35-b8a2-733af21d9b61-kube-api-access-6pphq\") pod \"7d156614-4cec-4d35-b8a2-733af21d9b61\" (UID: \"7d156614-4cec-4d35-b8a2-733af21d9b61\") "
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.334059 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-tmp\") pod \"controller-manager-59b6f7d894-jtb6k\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.334088 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpg48\" (UniqueName: \"kubernetes.io/projected/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-kube-api-access-xpg48\") pod \"controller-manager-59b6f7d894-jtb6k\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.334134 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-proxy-ca-bundles\") pod \"controller-manager-59b6f7d894-jtb6k\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.334183 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-serving-cert\") pod \"controller-manager-59b6f7d894-jtb6k\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.334205 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-client-ca\") pod \"controller-manager-59b6f7d894-jtb6k\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.334227 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-config\") pod \"controller-manager-59b6f7d894-jtb6k\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.334319 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3f9bbbb8-5106-45dc-ac4b-89f80a345518-tmp\") on node \"crc\" DevicePath \"\""
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.334329 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8fb8q\" (UniqueName: \"kubernetes.io/projected/3f9bbbb8-5106-45dc-ac4b-89f80a345518-kube-api-access-8fb8q\") on node \"crc\" DevicePath \"\""
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.334338 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f9bbbb8-5106-45dc-ac4b-89f80a345518-config\") on node \"crc\" DevicePath \"\""
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.334346 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f9bbbb8-5106-45dc-ac4b-89f80a345518-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.334354 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f9bbbb8-5106-45dc-ac4b-89f80a345518-client-ca\") on node \"crc\" DevicePath \"\""
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.334505 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d156614-4cec-4d35-b8a2-733af21d9b61-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7d156614-4cec-4d35-b8a2-733af21d9b61" (UID: "7d156614-4cec-4d35-b8a2-733af21d9b61"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.334806 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d156614-4cec-4d35-b8a2-733af21d9b61-client-ca" (OuterVolumeSpecName: "client-ca") pod "7d156614-4cec-4d35-b8a2-733af21d9b61" (UID: "7d156614-4cec-4d35-b8a2-733af21d9b61"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.335050 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d156614-4cec-4d35-b8a2-733af21d9b61-tmp" (OuterVolumeSpecName: "tmp") pod "7d156614-4cec-4d35-b8a2-733af21d9b61" (UID: "7d156614-4cec-4d35-b8a2-733af21d9b61"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.335786 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d156614-4cec-4d35-b8a2-733af21d9b61-config" (OuterVolumeSpecName: "config") pod "7d156614-4cec-4d35-b8a2-733af21d9b61" (UID: "7d156614-4cec-4d35-b8a2-733af21d9b61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.339681 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d156614-4cec-4d35-b8a2-733af21d9b61-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7d156614-4cec-4d35-b8a2-733af21d9b61" (UID: "7d156614-4cec-4d35-b8a2-733af21d9b61"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.340792 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d156614-4cec-4d35-b8a2-733af21d9b61-kube-api-access-6pphq" (OuterVolumeSpecName: "kube-api-access-6pphq") pod "7d156614-4cec-4d35-b8a2-733af21d9b61" (UID: "7d156614-4cec-4d35-b8a2-733af21d9b61"). InnerVolumeSpecName "kube-api-access-6pphq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.420760 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.435207 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-serving-cert\") pod \"controller-manager-59b6f7d894-jtb6k\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.435248 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-client-ca\") pod \"controller-manager-59b6f7d894-jtb6k\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.435283 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-config\") pod \"controller-manager-59b6f7d894-jtb6k\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.435362 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-tmp\") pod \"controller-manager-59b6f7d894-jtb6k\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k"
Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.435389 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xpg48\" (UniqueName:
\"kubernetes.io/projected/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-kube-api-access-xpg48\") pod \"controller-manager-59b6f7d894-jtb6k\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.435436 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-proxy-ca-bundles\") pod \"controller-manager-59b6f7d894-jtb6k\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.435492 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d156614-4cec-4d35-b8a2-733af21d9b61-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.435508 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d156614-4cec-4d35-b8a2-733af21d9b61-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.435520 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d156614-4cec-4d35-b8a2-733af21d9b61-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.435531 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6pphq\" (UniqueName: \"kubernetes.io/projected/7d156614-4cec-4d35-b8a2-733af21d9b61-kube-api-access-6pphq\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.435542 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d156614-4cec-4d35-b8a2-733af21d9b61-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:33 crc 
kubenswrapper[5110]: I0122 14:20:33.435552 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d156614-4cec-4d35-b8a2-733af21d9b61-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.436317 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-client-ca\") pod \"controller-manager-59b6f7d894-jtb6k\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.436592 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-proxy-ca-bundles\") pod \"controller-manager-59b6f7d894-jtb6k\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.437387 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-config\") pod \"controller-manager-59b6f7d894-jtb6k\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.441060 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-serving-cert\") pod \"controller-manager-59b6f7d894-jtb6k\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.441751 5110 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-tmp\") pod \"controller-manager-59b6f7d894-jtb6k\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.453781 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpg48\" (UniqueName: \"kubernetes.io/projected/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-kube-api-access-xpg48\") pod \"controller-manager-59b6f7d894-jtb6k\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.550715 5110 generic.go:358] "Generic (PLEG): container finished" podID="3f9bbbb8-5106-45dc-ac4b-89f80a345518" containerID="f4ff7561cf5c59850a30f90b1547f98ad64605bbe6f4a1bf2ab3dd8b01878527" exitCode=0 Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.550898 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" event={"ID":"3f9bbbb8-5106-45dc-ac4b-89f80a345518","Type":"ContainerDied","Data":"f4ff7561cf5c59850a30f90b1547f98ad64605bbe6f4a1bf2ab3dd8b01878527"} Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.550928 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" event={"ID":"3f9bbbb8-5106-45dc-ac4b-89f80a345518","Type":"ContainerDied","Data":"adec06f6f95ce19f94681c4e68f7191e37e6ab3c63cabc4ebec73a5d2f8ac4d2"} Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.550953 5110 scope.go:117] "RemoveContainer" containerID="f4ff7561cf5c59850a30f90b1547f98ad64605bbe6f4a1bf2ab3dd8b01878527" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.551139 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.553638 5110 generic.go:358] "Generic (PLEG): container finished" podID="7d156614-4cec-4d35-b8a2-733af21d9b61" containerID="b4c98d8dadcc2f760b10f0fd0e48a091957d04de41b067ef0d0e356b1d284316" exitCode=0 Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.553799 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" event={"ID":"7d156614-4cec-4d35-b8a2-733af21d9b61","Type":"ContainerDied","Data":"b4c98d8dadcc2f760b10f0fd0e48a091957d04de41b067ef0d0e356b1d284316"} Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.553854 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" event={"ID":"7d156614-4cec-4d35-b8a2-733af21d9b61","Type":"ContainerDied","Data":"f85db6dc154667d5bd3de692cd57704411324615744603a9986495255c9cf316"} Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.554168 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.576521 5110 scope.go:117] "RemoveContainer" containerID="f4ff7561cf5c59850a30f90b1547f98ad64605bbe6f4a1bf2ab3dd8b01878527" Jan 22 14:20:33 crc kubenswrapper[5110]: E0122 14:20:33.577581 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4ff7561cf5c59850a30f90b1547f98ad64605bbe6f4a1bf2ab3dd8b01878527\": container with ID starting with f4ff7561cf5c59850a30f90b1547f98ad64605bbe6f4a1bf2ab3dd8b01878527 not found: ID does not exist" containerID="f4ff7561cf5c59850a30f90b1547f98ad64605bbe6f4a1bf2ab3dd8b01878527" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.577612 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4ff7561cf5c59850a30f90b1547f98ad64605bbe6f4a1bf2ab3dd8b01878527"} err="failed to get container status \"f4ff7561cf5c59850a30f90b1547f98ad64605bbe6f4a1bf2ab3dd8b01878527\": rpc error: code = NotFound desc = could not find container \"f4ff7561cf5c59850a30f90b1547f98ad64605bbe6f4a1bf2ab3dd8b01878527\": container with ID starting with f4ff7561cf5c59850a30f90b1547f98ad64605bbe6f4a1bf2ab3dd8b01878527 not found: ID does not exist" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.577676 5110 scope.go:117] "RemoveContainer" containerID="b4c98d8dadcc2f760b10f0fd0e48a091957d04de41b067ef0d0e356b1d284316" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.588763 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2"] Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.593697 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65989d4b98-2pvn2"] Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.602643 5110 
kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd"] Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.607047 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-69dc94bbc8-t8zcd"] Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.611768 5110 scope.go:117] "RemoveContainer" containerID="b4c98d8dadcc2f760b10f0fd0e48a091957d04de41b067ef0d0e356b1d284316" Jan 22 14:20:33 crc kubenswrapper[5110]: E0122 14:20:33.612197 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4c98d8dadcc2f760b10f0fd0e48a091957d04de41b067ef0d0e356b1d284316\": container with ID starting with b4c98d8dadcc2f760b10f0fd0e48a091957d04de41b067ef0d0e356b1d284316 not found: ID does not exist" containerID="b4c98d8dadcc2f760b10f0fd0e48a091957d04de41b067ef0d0e356b1d284316" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.612229 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4c98d8dadcc2f760b10f0fd0e48a091957d04de41b067ef0d0e356b1d284316"} err="failed to get container status \"b4c98d8dadcc2f760b10f0fd0e48a091957d04de41b067ef0d0e356b1d284316\": rpc error: code = NotFound desc = could not find container \"b4c98d8dadcc2f760b10f0fd0e48a091957d04de41b067ef0d0e356b1d284316\": container with ID starting with b4c98d8dadcc2f760b10f0fd0e48a091957d04de41b067ef0d0e356b1d284316 not found: ID does not exist" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.646462 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.821466 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"] Jan 22 14:20:33 crc kubenswrapper[5110]: W0122 14:20:33.826342 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09401ea1_d993_410a_9177_326d83efe29f.slice/crio-1e17eed2905e46d11d86b209c63e22e8b9031b6ea57e585eb8f8b8ea6a8b1e59 WatchSource:0}: Error finding container 1e17eed2905e46d11d86b209c63e22e8b9031b6ea57e585eb8f8b8ea6a8b1e59: Status 404 returned error can't find the container with id 1e17eed2905e46d11d86b209c63e22e8b9031b6ea57e585eb8f8b8ea6a8b1e59 Jan 22 14:20:33 crc kubenswrapper[5110]: I0122 14:20:33.828594 5110 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 14:20:34 crc kubenswrapper[5110]: I0122 14:20:34.036236 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59b6f7d894-jtb6k"] Jan 22 14:20:34 crc kubenswrapper[5110]: W0122 14:20:34.042948 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1be09e9_8664_4bf9_be87_dcf64ecf4cd1.slice/crio-df10b75ca57f856fa15a7ba64b4d580c12985ddf17fe1bf4fe75081f74feeb87 WatchSource:0}: Error finding container df10b75ca57f856fa15a7ba64b4d580c12985ddf17fe1bf4fe75081f74feeb87: Status 404 returned error can't find the container with id df10b75ca57f856fa15a7ba64b4d580c12985ddf17fe1bf4fe75081f74feeb87 Jan 22 14:20:34 crc kubenswrapper[5110]: I0122 14:20:34.279735 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f9bbbb8-5106-45dc-ac4b-89f80a345518" path="/var/lib/kubelet/pods/3f9bbbb8-5106-45dc-ac4b-89f80a345518/volumes" Jan 22 14:20:34 crc 
kubenswrapper[5110]: I0122 14:20:34.280565 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d156614-4cec-4d35-b8a2-733af21d9b61" path="/var/lib/kubelet/pods/7d156614-4cec-4d35-b8a2-733af21d9b61/volumes" Jan 22 14:20:34 crc kubenswrapper[5110]: I0122 14:20:34.561180 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh" event={"ID":"09401ea1-d993-410a-9177-326d83efe29f","Type":"ContainerStarted","Data":"f015e5923fd28858624f3458aa7313a77b859dd342043352813d6ad059ec4c06"} Jan 22 14:20:34 crc kubenswrapper[5110]: I0122 14:20:34.561228 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh" event={"ID":"09401ea1-d993-410a-9177-326d83efe29f","Type":"ContainerStarted","Data":"1e17eed2905e46d11d86b209c63e22e8b9031b6ea57e585eb8f8b8ea6a8b1e59"} Jan 22 14:20:34 crc kubenswrapper[5110]: I0122 14:20:34.561945 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh" Jan 22 14:20:34 crc kubenswrapper[5110]: I0122 14:20:34.565616 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" event={"ID":"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1","Type":"ContainerStarted","Data":"5046ab698ca8f002faf4b531e9b670d1d747c4ca2f4581d104466024abf3ea0e"} Jan 22 14:20:34 crc kubenswrapper[5110]: I0122 14:20:34.565681 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" event={"ID":"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1","Type":"ContainerStarted","Data":"df10b75ca57f856fa15a7ba64b4d580c12985ddf17fe1bf4fe75081f74feeb87"} Jan 22 14:20:34 crc kubenswrapper[5110]: I0122 14:20:34.565864 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" Jan 22 14:20:34 crc kubenswrapper[5110]: I0122 14:20:34.570001 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" Jan 22 14:20:34 crc kubenswrapper[5110]: I0122 14:20:34.599925 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh" podStartSLOduration=2.59990552 podStartE2EDuration="2.59990552s" podCreationTimestamp="2026-01-22 14:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:20:34.583399374 +0000 UTC m=+314.805483753" watchObservedRunningTime="2026-01-22 14:20:34.59990552 +0000 UTC m=+314.821989879" Jan 22 14:20:34 crc kubenswrapper[5110]: I0122 14:20:34.601814 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" podStartSLOduration=2.601807331 podStartE2EDuration="2.601807331s" podCreationTimestamp="2026-01-22 14:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:20:34.599229893 +0000 UTC m=+314.821314262" watchObservedRunningTime="2026-01-22 14:20:34.601807331 +0000 UTC m=+314.823891690" Jan 22 14:20:34 crc kubenswrapper[5110]: I0122 14:20:34.722280 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.035153 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59b6f7d894-jtb6k"] Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.036413 5110 kuberuntime_container.go:858] "Killing 
container with a grace period" pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" podUID="d1be09e9-8664-4bf9-be87-dcf64ecf4cd1" containerName="controller-manager" containerID="cri-o://5046ab698ca8f002faf4b531e9b670d1d747c4ca2f4581d104466024abf3ea0e" gracePeriod=30 Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.133507 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"] Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.133866 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh" podUID="09401ea1-d993-410a-9177-326d83efe29f" containerName="route-controller-manager" containerID="cri-o://f015e5923fd28858624f3458aa7313a77b859dd342043352813d6ad059ec4c06" gracePeriod=30 Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.589924 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.600512 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.603721 5110 generic.go:358] "Generic (PLEG): container finished" podID="d1be09e9-8664-4bf9-be87-dcf64ecf4cd1" containerID="5046ab698ca8f002faf4b531e9b670d1d747c4ca2f4581d104466024abf3ea0e" exitCode=0 Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.604023 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" event={"ID":"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1","Type":"ContainerDied","Data":"5046ab698ca8f002faf4b531e9b670d1d747c4ca2f4581d104466024abf3ea0e"} Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.604084 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" event={"ID":"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1","Type":"ContainerDied","Data":"df10b75ca57f856fa15a7ba64b4d580c12985ddf17fe1bf4fe75081f74feeb87"} Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.604111 5110 scope.go:117] "RemoveContainer" containerID="5046ab698ca8f002faf4b531e9b670d1d747c4ca2f4581d104466024abf3ea0e" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.604342 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59b6f7d894-jtb6k" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.611005 5110 generic.go:358] "Generic (PLEG): container finished" podID="09401ea1-d993-410a-9177-326d83efe29f" containerID="f015e5923fd28858624f3458aa7313a77b859dd342043352813d6ad059ec4c06" exitCode=0 Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.611216 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh" event={"ID":"09401ea1-d993-410a-9177-326d83efe29f","Type":"ContainerDied","Data":"f015e5923fd28858624f3458aa7313a77b859dd342043352813d6ad059ec4c06"} Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.611251 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh" event={"ID":"09401ea1-d993-410a-9177-326d83efe29f","Type":"ContainerDied","Data":"1e17eed2905e46d11d86b209c63e22e8b9031b6ea57e585eb8f8b8ea6a8b1e59"} Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.611351 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.624454 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09401ea1-d993-410a-9177-326d83efe29f-config\") pod \"09401ea1-d993-410a-9177-326d83efe29f\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.624588 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09401ea1-d993-410a-9177-326d83efe29f-client-ca\") pod \"09401ea1-d993-410a-9177-326d83efe29f\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.624690 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/09401ea1-d993-410a-9177-326d83efe29f-tmp\") pod \"09401ea1-d993-410a-9177-326d83efe29f\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.624769 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ns2r\" (UniqueName: \"kubernetes.io/projected/09401ea1-d993-410a-9177-326d83efe29f-kube-api-access-6ns2r\") pod \"09401ea1-d993-410a-9177-326d83efe29f\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.624798 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09401ea1-d993-410a-9177-326d83efe29f-serving-cert\") pod \"09401ea1-d993-410a-9177-326d83efe29f\" (UID: \"09401ea1-d993-410a-9177-326d83efe29f\") " Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.626047 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/09401ea1-d993-410a-9177-326d83efe29f-config" (OuterVolumeSpecName: "config") pod "09401ea1-d993-410a-9177-326d83efe29f" (UID: "09401ea1-d993-410a-9177-326d83efe29f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.626455 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6"] Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.627454 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="09401ea1-d993-410a-9177-326d83efe29f" containerName="route-controller-manager" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.627479 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="09401ea1-d993-410a-9177-326d83efe29f" containerName="route-controller-manager" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.627503 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d1be09e9-8664-4bf9-be87-dcf64ecf4cd1" containerName="controller-manager" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.627514 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1be09e9-8664-4bf9-be87-dcf64ecf4cd1" containerName="controller-manager" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.627737 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="d1be09e9-8664-4bf9-be87-dcf64ecf4cd1" containerName="controller-manager" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.627759 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="09401ea1-d993-410a-9177-326d83efe29f" containerName="route-controller-manager" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.631198 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09401ea1-d993-410a-9177-326d83efe29f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod 
"09401ea1-d993-410a-9177-326d83efe29f" (UID: "09401ea1-d993-410a-9177-326d83efe29f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.631286 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09401ea1-d993-410a-9177-326d83efe29f-tmp" (OuterVolumeSpecName: "tmp") pod "09401ea1-d993-410a-9177-326d83efe29f" (UID: "09401ea1-d993-410a-9177-326d83efe29f"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.631398 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09401ea1-d993-410a-9177-326d83efe29f-client-ca" (OuterVolumeSpecName: "client-ca") pod "09401ea1-d993-410a-9177-326d83efe29f" (UID: "09401ea1-d993-410a-9177-326d83efe29f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.636596 5110 scope.go:117] "RemoveContainer" containerID="5046ab698ca8f002faf4b531e9b670d1d747c4ca2f4581d104466024abf3ea0e" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.636677 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09401ea1-d993-410a-9177-326d83efe29f-kube-api-access-6ns2r" (OuterVolumeSpecName: "kube-api-access-6ns2r") pod "09401ea1-d993-410a-9177-326d83efe29f" (UID: "09401ea1-d993-410a-9177-326d83efe29f"). InnerVolumeSpecName "kube-api-access-6ns2r". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:20:40 crc kubenswrapper[5110]: E0122 14:20:40.640789 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5046ab698ca8f002faf4b531e9b670d1d747c4ca2f4581d104466024abf3ea0e\": container with ID starting with 5046ab698ca8f002faf4b531e9b670d1d747c4ca2f4581d104466024abf3ea0e not found: ID does not exist" containerID="5046ab698ca8f002faf4b531e9b670d1d747c4ca2f4581d104466024abf3ea0e" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.640984 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5046ab698ca8f002faf4b531e9b670d1d747c4ca2f4581d104466024abf3ea0e"} err="failed to get container status \"5046ab698ca8f002faf4b531e9b670d1d747c4ca2f4581d104466024abf3ea0e\": rpc error: code = NotFound desc = could not find container \"5046ab698ca8f002faf4b531e9b670d1d747c4ca2f4581d104466024abf3ea0e\": container with ID starting with 5046ab698ca8f002faf4b531e9b670d1d747c4ca2f4581d104466024abf3ea0e not found: ID does not exist" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.641066 5110 scope.go:117] "RemoveContainer" containerID="f015e5923fd28858624f3458aa7313a77b859dd342043352813d6ad059ec4c06" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.648907 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6"] Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.649084 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.664808 5110 scope.go:117] "RemoveContainer" containerID="f015e5923fd28858624f3458aa7313a77b859dd342043352813d6ad059ec4c06" Jan 22 14:20:40 crc kubenswrapper[5110]: E0122 14:20:40.665265 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f015e5923fd28858624f3458aa7313a77b859dd342043352813d6ad059ec4c06\": container with ID starting with f015e5923fd28858624f3458aa7313a77b859dd342043352813d6ad059ec4c06 not found: ID does not exist" containerID="f015e5923fd28858624f3458aa7313a77b859dd342043352813d6ad059ec4c06" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.665297 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f015e5923fd28858624f3458aa7313a77b859dd342043352813d6ad059ec4c06"} err="failed to get container status \"f015e5923fd28858624f3458aa7313a77b859dd342043352813d6ad059ec4c06\": rpc error: code = NotFound desc = could not find container \"f015e5923fd28858624f3458aa7313a77b859dd342043352813d6ad059ec4c06\": container with ID starting with f015e5923fd28858624f3458aa7313a77b859dd342043352813d6ad059ec4c06 not found: ID does not exist" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.675582 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d7655bff6-27hcb"] Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.681798 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.682317 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d7655bff6-27hcb"] Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.725911 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-proxy-ca-bundles\") pod \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.725972 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-config\") pod \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.725993 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-tmp\") pod \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726052 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpg48\" (UniqueName: \"kubernetes.io/projected/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-kube-api-access-xpg48\") pod \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726111 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-client-ca\") pod \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\" (UID: 
\"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726128 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-serving-cert\") pod \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\" (UID: \"d1be09e9-8664-4bf9-be87-dcf64ecf4cd1\") " Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726263 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-config\") pod \"route-controller-manager-7f8779cff8-mbqj6\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726295 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngntm\" (UniqueName: \"kubernetes.io/projected/d84ab683-e38c-4b5e-afaf-0b5522e08663-kube-api-access-ngntm\") pod \"controller-manager-d7655bff6-27hcb\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726332 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-tmp\") pod \"route-controller-manager-7f8779cff8-mbqj6\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726351 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d84ab683-e38c-4b5e-afaf-0b5522e08663-proxy-ca-bundles\") pod \"controller-manager-d7655bff6-27hcb\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726375 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-serving-cert\") pod \"route-controller-manager-7f8779cff8-mbqj6\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726396 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d84ab683-e38c-4b5e-afaf-0b5522e08663-tmp\") pod \"controller-manager-d7655bff6-27hcb\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726418 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d84ab683-e38c-4b5e-afaf-0b5522e08663-serving-cert\") pod \"controller-manager-d7655bff6-27hcb\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726453 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-client-ca\") pod \"route-controller-manager-7f8779cff8-mbqj6\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 
22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726468 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcnpw\" (UniqueName: \"kubernetes.io/projected/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-kube-api-access-xcnpw\") pod \"route-controller-manager-7f8779cff8-mbqj6\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726499 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d84ab683-e38c-4b5e-afaf-0b5522e08663-config\") pod \"controller-manager-d7655bff6-27hcb\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726512 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d84ab683-e38c-4b5e-afaf-0b5522e08663-client-ca\") pod \"controller-manager-d7655bff6-27hcb\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726545 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09401ea1-d993-410a-9177-326d83efe29f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726555 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/09401ea1-d993-410a-9177-326d83efe29f-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726564 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6ns2r\" (UniqueName: 
\"kubernetes.io/projected/09401ea1-d993-410a-9177-326d83efe29f-kube-api-access-6ns2r\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726573 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09401ea1-d993-410a-9177-326d83efe29f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726582 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09401ea1-d993-410a-9177-326d83efe29f-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726677 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-tmp" (OuterVolumeSpecName: "tmp") pod "d1be09e9-8664-4bf9-be87-dcf64ecf4cd1" (UID: "d1be09e9-8664-4bf9-be87-dcf64ecf4cd1"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.726951 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d1be09e9-8664-4bf9-be87-dcf64ecf4cd1" (UID: "d1be09e9-8664-4bf9-be87-dcf64ecf4cd1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.727011 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-client-ca" (OuterVolumeSpecName: "client-ca") pod "d1be09e9-8664-4bf9-be87-dcf64ecf4cd1" (UID: "d1be09e9-8664-4bf9-be87-dcf64ecf4cd1"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.727048 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-config" (OuterVolumeSpecName: "config") pod "d1be09e9-8664-4bf9-be87-dcf64ecf4cd1" (UID: "d1be09e9-8664-4bf9-be87-dcf64ecf4cd1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.729220 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d1be09e9-8664-4bf9-be87-dcf64ecf4cd1" (UID: "d1be09e9-8664-4bf9-be87-dcf64ecf4cd1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.729372 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-kube-api-access-xpg48" (OuterVolumeSpecName: "kube-api-access-xpg48") pod "d1be09e9-8664-4bf9-be87-dcf64ecf4cd1" (UID: "d1be09e9-8664-4bf9-be87-dcf64ecf4cd1"). InnerVolumeSpecName "kube-api-access-xpg48". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.828349 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ngntm\" (UniqueName: \"kubernetes.io/projected/d84ab683-e38c-4b5e-afaf-0b5522e08663-kube-api-access-ngntm\") pod \"controller-manager-d7655bff6-27hcb\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.828429 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-tmp\") pod \"route-controller-manager-7f8779cff8-mbqj6\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.828455 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d84ab683-e38c-4b5e-afaf-0b5522e08663-proxy-ca-bundles\") pod \"controller-manager-d7655bff6-27hcb\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.828482 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-serving-cert\") pod \"route-controller-manager-7f8779cff8-mbqj6\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.828508 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d84ab683-e38c-4b5e-afaf-0b5522e08663-tmp\") 
pod \"controller-manager-d7655bff6-27hcb\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.828533 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d84ab683-e38c-4b5e-afaf-0b5522e08663-serving-cert\") pod \"controller-manager-d7655bff6-27hcb\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.828576 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-client-ca\") pod \"route-controller-manager-7f8779cff8-mbqj6\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.828595 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xcnpw\" (UniqueName: \"kubernetes.io/projected/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-kube-api-access-xcnpw\") pod \"route-controller-manager-7f8779cff8-mbqj6\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.828702 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d84ab683-e38c-4b5e-afaf-0b5522e08663-config\") pod \"controller-manager-d7655bff6-27hcb\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.828723 5110 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d84ab683-e38c-4b5e-afaf-0b5522e08663-client-ca\") pod \"controller-manager-d7655bff6-27hcb\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.828749 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-config\") pod \"route-controller-manager-7f8779cff8-mbqj6\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.828794 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.828806 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.828816 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.828826 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xpg48\" (UniqueName: \"kubernetes.io/projected/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-kube-api-access-xpg48\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.828837 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-client-ca\") on node \"crc\" 
DevicePath \"\"" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.828846 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.829147 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-tmp\") pod \"route-controller-manager-7f8779cff8-mbqj6\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.830118 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-config\") pod \"route-controller-manager-7f8779cff8-mbqj6\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.831252 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d84ab683-e38c-4b5e-afaf-0b5522e08663-tmp\") pod \"controller-manager-d7655bff6-27hcb\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.831612 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-client-ca\") pod \"route-controller-manager-7f8779cff8-mbqj6\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.831735 5110 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d84ab683-e38c-4b5e-afaf-0b5522e08663-config\") pod \"controller-manager-d7655bff6-27hcb\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.832087 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d84ab683-e38c-4b5e-afaf-0b5522e08663-proxy-ca-bundles\") pod \"controller-manager-d7655bff6-27hcb\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.832310 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d84ab683-e38c-4b5e-afaf-0b5522e08663-client-ca\") pod \"controller-manager-d7655bff6-27hcb\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.833516 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d84ab683-e38c-4b5e-afaf-0b5522e08663-serving-cert\") pod \"controller-manager-d7655bff6-27hcb\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.843119 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-serving-cert\") pod \"route-controller-manager-7f8779cff8-mbqj6\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:40 crc 
kubenswrapper[5110]: I0122 14:20:40.846279 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngntm\" (UniqueName: \"kubernetes.io/projected/d84ab683-e38c-4b5e-afaf-0b5522e08663-kube-api-access-ngntm\") pod \"controller-manager-d7655bff6-27hcb\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.855459 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcnpw\" (UniqueName: \"kubernetes.io/projected/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-kube-api-access-xcnpw\") pod \"route-controller-manager-7f8779cff8-mbqj6\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.934955 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59b6f7d894-jtb6k"] Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.940348 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-59b6f7d894-jtb6k"] Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.946057 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"] Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.952328 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c57c57c4-ljgqh"] Jan 22 14:20:40 crc kubenswrapper[5110]: I0122 14:20:40.977647 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:41 crc kubenswrapper[5110]: I0122 14:20:41.002225 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:41 crc kubenswrapper[5110]: I0122 14:20:41.229876 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d7655bff6-27hcb"] Jan 22 14:20:41 crc kubenswrapper[5110]: I0122 14:20:41.365729 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6"] Jan 22 14:20:41 crc kubenswrapper[5110]: I0122 14:20:41.617465 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" event={"ID":"d84ab683-e38c-4b5e-afaf-0b5522e08663","Type":"ContainerStarted","Data":"3a551b1264c9a6269a843622e05a6430940eadec79b60787deb095e1c1a87ee2"} Jan 22 14:20:41 crc kubenswrapper[5110]: I0122 14:20:41.617875 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:41 crc kubenswrapper[5110]: I0122 14:20:41.617891 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" event={"ID":"d84ab683-e38c-4b5e-afaf-0b5522e08663","Type":"ContainerStarted","Data":"ef6390a2a9789f64798e0d9025255b18386c98f7ccb921d27591be92803f2609"} Jan 22 14:20:41 crc kubenswrapper[5110]: I0122 14:20:41.620766 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" event={"ID":"0679aae3-6731-4b02-a338-bc2e7f1e9c0f","Type":"ContainerStarted","Data":"90cc3075cdda5fbb25d0c399b7ba5036fd697385846fa3f090a3458e0da6ff34"} Jan 22 14:20:41 crc kubenswrapper[5110]: I0122 14:20:41.620799 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" 
event={"ID":"0679aae3-6731-4b02-a338-bc2e7f1e9c0f","Type":"ContainerStarted","Data":"7187c909d76d09cdabf5ef8b784c3fecddd6eb2c48f650bdaabfc30debe51751"} Jan 22 14:20:41 crc kubenswrapper[5110]: I0122 14:20:41.621387 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:41 crc kubenswrapper[5110]: I0122 14:20:41.639055 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" podStartSLOduration=1.639040998 podStartE2EDuration="1.639040998s" podCreationTimestamp="2026-01-22 14:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:20:41.636253125 +0000 UTC m=+321.858337484" watchObservedRunningTime="2026-01-22 14:20:41.639040998 +0000 UTC m=+321.861125357" Jan 22 14:20:41 crc kubenswrapper[5110]: I0122 14:20:41.951116 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:41 crc kubenswrapper[5110]: I0122 14:20:41.984455 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" podStartSLOduration=1.984438059 podStartE2EDuration="1.984438059s" podCreationTimestamp="2026-01-22 14:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:20:41.657495026 +0000 UTC m=+321.879579395" watchObservedRunningTime="2026-01-22 14:20:41.984438059 +0000 UTC m=+322.206522418" Jan 22 14:20:42 crc kubenswrapper[5110]: I0122 14:20:42.288096 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09401ea1-d993-410a-9177-326d83efe29f" 
path="/var/lib/kubelet/pods/09401ea1-d993-410a-9177-326d83efe29f/volumes" Jan 22 14:20:42 crc kubenswrapper[5110]: I0122 14:20:42.289572 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1be09e9-8664-4bf9-be87-dcf64ecf4cd1" path="/var/lib/kubelet/pods/d1be09e9-8664-4bf9-be87-dcf64ecf4cd1/volumes" Jan 22 14:20:42 crc kubenswrapper[5110]: I0122 14:20:42.474285 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:52 crc kubenswrapper[5110]: I0122 14:20:52.585191 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d7655bff6-27hcb"] Jan 22 14:20:52 crc kubenswrapper[5110]: I0122 14:20:52.586213 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" podUID="d84ab683-e38c-4b5e-afaf-0b5522e08663" containerName="controller-manager" containerID="cri-o://3a551b1264c9a6269a843622e05a6430940eadec79b60787deb095e1c1a87ee2" gracePeriod=30 Jan 22 14:20:52 crc kubenswrapper[5110]: I0122 14:20:52.603866 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6"] Jan 22 14:20:52 crc kubenswrapper[5110]: I0122 14:20:52.604327 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" podUID="0679aae3-6731-4b02-a338-bc2e7f1e9c0f" containerName="route-controller-manager" containerID="cri-o://90cc3075cdda5fbb25d0c399b7ba5036fd697385846fa3f090a3458e0da6ff34" gracePeriod=30 Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.066395 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.096191 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh"] Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.097173 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0679aae3-6731-4b02-a338-bc2e7f1e9c0f" containerName="route-controller-manager" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.097293 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="0679aae3-6731-4b02-a338-bc2e7f1e9c0f" containerName="route-controller-manager" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.097464 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="0679aae3-6731-4b02-a338-bc2e7f1e9c0f" containerName="route-controller-manager" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.100545 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.111740 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh"] Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.194680 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-config\") pod \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.194744 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-tmp\") pod \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.194785 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-client-ca\") pod \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.194862 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-serving-cert\") pod \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\" (UID: \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.194965 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcnpw\" (UniqueName: \"kubernetes.io/projected/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-kube-api-access-xcnpw\") pod \"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\" (UID: 
\"0679aae3-6731-4b02-a338-bc2e7f1e9c0f\") " Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.195083 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/250c4d10-a134-4ee6-970c-c0db68fbcc04-config\") pod \"route-controller-manager-58c57c57c4-dh4jh\" (UID: \"250c4d10-a134-4ee6-970c-c0db68fbcc04\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.195116 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/250c4d10-a134-4ee6-970c-c0db68fbcc04-serving-cert\") pod \"route-controller-manager-58c57c57c4-dh4jh\" (UID: \"250c4d10-a134-4ee6-970c-c0db68fbcc04\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.195148 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/250c4d10-a134-4ee6-970c-c0db68fbcc04-tmp\") pod \"route-controller-manager-58c57c57c4-dh4jh\" (UID: \"250c4d10-a134-4ee6-970c-c0db68fbcc04\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.195180 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwksq\" (UniqueName: \"kubernetes.io/projected/250c4d10-a134-4ee6-970c-c0db68fbcc04-kube-api-access-vwksq\") pod \"route-controller-manager-58c57c57c4-dh4jh\" (UID: \"250c4d10-a134-4ee6-970c-c0db68fbcc04\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.195206 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/250c4d10-a134-4ee6-970c-c0db68fbcc04-client-ca\") pod \"route-controller-manager-58c57c57c4-dh4jh\" (UID: \"250c4d10-a134-4ee6-970c-c0db68fbcc04\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.195234 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-tmp" (OuterVolumeSpecName: "tmp") pod "0679aae3-6731-4b02-a338-bc2e7f1e9c0f" (UID: "0679aae3-6731-4b02-a338-bc2e7f1e9c0f"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.195530 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-client-ca" (OuterVolumeSpecName: "client-ca") pod "0679aae3-6731-4b02-a338-bc2e7f1e9c0f" (UID: "0679aae3-6731-4b02-a338-bc2e7f1e9c0f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.195934 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-config" (OuterVolumeSpecName: "config") pod "0679aae3-6731-4b02-a338-bc2e7f1e9c0f" (UID: "0679aae3-6731-4b02-a338-bc2e7f1e9c0f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.200585 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0679aae3-6731-4b02-a338-bc2e7f1e9c0f" (UID: "0679aae3-6731-4b02-a338-bc2e7f1e9c0f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.204327 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-kube-api-access-xcnpw" (OuterVolumeSpecName: "kube-api-access-xcnpw") pod "0679aae3-6731-4b02-a338-bc2e7f1e9c0f" (UID: "0679aae3-6731-4b02-a338-bc2e7f1e9c0f"). InnerVolumeSpecName "kube-api-access-xcnpw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.251280 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.279769 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59b6f7d894-d4f7s"] Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.280448 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d84ab683-e38c-4b5e-afaf-0b5522e08663" containerName="controller-manager" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.280477 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84ab683-e38c-4b5e-afaf-0b5522e08663" containerName="controller-manager" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.280634 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="d84ab683-e38c-4b5e-afaf-0b5522e08663" containerName="controller-manager" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.296407 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/250c4d10-a134-4ee6-970c-c0db68fbcc04-client-ca\") pod \"route-controller-manager-58c57c57c4-dh4jh\" (UID: \"250c4d10-a134-4ee6-970c-c0db68fbcc04\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" Jan 22 14:20:53 crc 
kubenswrapper[5110]: I0122 14:20:53.296698 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/250c4d10-a134-4ee6-970c-c0db68fbcc04-config\") pod \"route-controller-manager-58c57c57c4-dh4jh\" (UID: \"250c4d10-a134-4ee6-970c-c0db68fbcc04\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.296831 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/250c4d10-a134-4ee6-970c-c0db68fbcc04-serving-cert\") pod \"route-controller-manager-58c57c57c4-dh4jh\" (UID: \"250c4d10-a134-4ee6-970c-c0db68fbcc04\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.297432 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/250c4d10-a134-4ee6-970c-c0db68fbcc04-tmp\") pod \"route-controller-manager-58c57c57c4-dh4jh\" (UID: \"250c4d10-a134-4ee6-970c-c0db68fbcc04\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.297570 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vwksq\" (UniqueName: \"kubernetes.io/projected/250c4d10-a134-4ee6-970c-c0db68fbcc04-kube-api-access-vwksq\") pod \"route-controller-manager-58c57c57c4-dh4jh\" (UID: \"250c4d10-a134-4ee6-970c-c0db68fbcc04\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.297743 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xcnpw\" (UniqueName: \"kubernetes.io/projected/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-kube-api-access-xcnpw\") on node \"crc\" DevicePath \"\"" Jan 22 
14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.297833 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.297897 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.297957 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.298019 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0679aae3-6731-4b02-a338-bc2e7f1e9c0f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.297778 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/250c4d10-a134-4ee6-970c-c0db68fbcc04-client-ca\") pod \"route-controller-manager-58c57c57c4-dh4jh\" (UID: \"250c4d10-a134-4ee6-970c-c0db68fbcc04\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.297847 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/250c4d10-a134-4ee6-970c-c0db68fbcc04-tmp\") pod \"route-controller-manager-58c57c57c4-dh4jh\" (UID: \"250c4d10-a134-4ee6-970c-c0db68fbcc04\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.298427 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/250c4d10-a134-4ee6-970c-c0db68fbcc04-config\") pod \"route-controller-manager-58c57c57c4-dh4jh\" (UID: \"250c4d10-a134-4ee6-970c-c0db68fbcc04\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.299334 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59b6f7d894-d4f7s"] Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.299475 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.302431 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/250c4d10-a134-4ee6-970c-c0db68fbcc04-serving-cert\") pod \"route-controller-manager-58c57c57c4-dh4jh\" (UID: \"250c4d10-a134-4ee6-970c-c0db68fbcc04\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.315973 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwksq\" (UniqueName: \"kubernetes.io/projected/250c4d10-a134-4ee6-970c-c0db68fbcc04-kube-api-access-vwksq\") pod \"route-controller-manager-58c57c57c4-dh4jh\" (UID: \"250c4d10-a134-4ee6-970c-c0db68fbcc04\") " pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.399113 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d84ab683-e38c-4b5e-afaf-0b5522e08663-client-ca\") pod \"d84ab683-e38c-4b5e-afaf-0b5522e08663\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.399162 5110 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d84ab683-e38c-4b5e-afaf-0b5522e08663-serving-cert\") pod \"d84ab683-e38c-4b5e-afaf-0b5522e08663\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.399293 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d84ab683-e38c-4b5e-afaf-0b5522e08663-config\") pod \"d84ab683-e38c-4b5e-afaf-0b5522e08663\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.399335 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d84ab683-e38c-4b5e-afaf-0b5522e08663-proxy-ca-bundles\") pod \"d84ab683-e38c-4b5e-afaf-0b5522e08663\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.399374 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngntm\" (UniqueName: \"kubernetes.io/projected/d84ab683-e38c-4b5e-afaf-0b5522e08663-kube-api-access-ngntm\") pod \"d84ab683-e38c-4b5e-afaf-0b5522e08663\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.399398 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d84ab683-e38c-4b5e-afaf-0b5522e08663-tmp\") pod \"d84ab683-e38c-4b5e-afaf-0b5522e08663\" (UID: \"d84ab683-e38c-4b5e-afaf-0b5522e08663\") " Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.399487 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4389fde5-9aa6-4ae6-b993-6710ec8d92d2-client-ca\") pod \"controller-manager-59b6f7d894-d4f7s\" (UID: 
\"4389fde5-9aa6-4ae6-b993-6710ec8d92d2\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.399568 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th89d\" (UniqueName: \"kubernetes.io/projected/4389fde5-9aa6-4ae6-b993-6710ec8d92d2-kube-api-access-th89d\") pod \"controller-manager-59b6f7d894-d4f7s\" (UID: \"4389fde5-9aa6-4ae6-b993-6710ec8d92d2\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.399590 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4389fde5-9aa6-4ae6-b993-6710ec8d92d2-serving-cert\") pod \"controller-manager-59b6f7d894-d4f7s\" (UID: \"4389fde5-9aa6-4ae6-b993-6710ec8d92d2\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.399604 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4389fde5-9aa6-4ae6-b993-6710ec8d92d2-tmp\") pod \"controller-manager-59b6f7d894-d4f7s\" (UID: \"4389fde5-9aa6-4ae6-b993-6710ec8d92d2\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.399692 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4389fde5-9aa6-4ae6-b993-6710ec8d92d2-config\") pod \"controller-manager-59b6f7d894-d4f7s\" (UID: \"4389fde5-9aa6-4ae6-b993-6710ec8d92d2\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.399726 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4389fde5-9aa6-4ae6-b993-6710ec8d92d2-proxy-ca-bundles\") pod \"controller-manager-59b6f7d894-d4f7s\" (UID: \"4389fde5-9aa6-4ae6-b993-6710ec8d92d2\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.400263 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84ab683-e38c-4b5e-afaf-0b5522e08663-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d84ab683-e38c-4b5e-afaf-0b5522e08663" (UID: "d84ab683-e38c-4b5e-afaf-0b5522e08663"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.400426 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84ab683-e38c-4b5e-afaf-0b5522e08663-config" (OuterVolumeSpecName: "config") pod "d84ab683-e38c-4b5e-afaf-0b5522e08663" (UID: "d84ab683-e38c-4b5e-afaf-0b5522e08663"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.400556 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d84ab683-e38c-4b5e-afaf-0b5522e08663-tmp" (OuterVolumeSpecName: "tmp") pod "d84ab683-e38c-4b5e-afaf-0b5522e08663" (UID: "d84ab683-e38c-4b5e-afaf-0b5522e08663"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.400694 5110 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d84ab683-e38c-4b5e-afaf-0b5522e08663-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.400719 5110 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d84ab683-e38c-4b5e-afaf-0b5522e08663-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.401041 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84ab683-e38c-4b5e-afaf-0b5522e08663-client-ca" (OuterVolumeSpecName: "client-ca") pod "d84ab683-e38c-4b5e-afaf-0b5522e08663" (UID: "d84ab683-e38c-4b5e-afaf-0b5522e08663"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.402501 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84ab683-e38c-4b5e-afaf-0b5522e08663-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d84ab683-e38c-4b5e-afaf-0b5522e08663" (UID: "d84ab683-e38c-4b5e-afaf-0b5522e08663"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.403241 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84ab683-e38c-4b5e-afaf-0b5522e08663-kube-api-access-ngntm" (OuterVolumeSpecName: "kube-api-access-ngntm") pod "d84ab683-e38c-4b5e-afaf-0b5522e08663" (UID: "d84ab683-e38c-4b5e-afaf-0b5522e08663"). InnerVolumeSpecName "kube-api-access-ngntm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.424970 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.502313 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4389fde5-9aa6-4ae6-b993-6710ec8d92d2-config\") pod \"controller-manager-59b6f7d894-d4f7s\" (UID: \"4389fde5-9aa6-4ae6-b993-6710ec8d92d2\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.502686 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4389fde5-9aa6-4ae6-b993-6710ec8d92d2-proxy-ca-bundles\") pod \"controller-manager-59b6f7d894-d4f7s\" (UID: \"4389fde5-9aa6-4ae6-b993-6710ec8d92d2\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.502744 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4389fde5-9aa6-4ae6-b993-6710ec8d92d2-client-ca\") pod \"controller-manager-59b6f7d894-d4f7s\" (UID: \"4389fde5-9aa6-4ae6-b993-6710ec8d92d2\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.502796 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-th89d\" (UniqueName: \"kubernetes.io/projected/4389fde5-9aa6-4ae6-b993-6710ec8d92d2-kube-api-access-th89d\") pod \"controller-manager-59b6f7d894-d4f7s\" (UID: \"4389fde5-9aa6-4ae6-b993-6710ec8d92d2\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: 
I0122 14:20:53.502826 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4389fde5-9aa6-4ae6-b993-6710ec8d92d2-serving-cert\") pod \"controller-manager-59b6f7d894-d4f7s\" (UID: \"4389fde5-9aa6-4ae6-b993-6710ec8d92d2\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.502849 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4389fde5-9aa6-4ae6-b993-6710ec8d92d2-tmp\") pod \"controller-manager-59b6f7d894-d4f7s\" (UID: \"4389fde5-9aa6-4ae6-b993-6710ec8d92d2\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.502909 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ngntm\" (UniqueName: \"kubernetes.io/projected/d84ab683-e38c-4b5e-afaf-0b5522e08663-kube-api-access-ngntm\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.502923 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d84ab683-e38c-4b5e-afaf-0b5522e08663-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.502936 5110 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d84ab683-e38c-4b5e-afaf-0b5522e08663-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.502946 5110 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d84ab683-e38c-4b5e-afaf-0b5522e08663-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.504784 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4389fde5-9aa6-4ae6-b993-6710ec8d92d2-config\") pod \"controller-manager-59b6f7d894-d4f7s\" (UID: \"4389fde5-9aa6-4ae6-b993-6710ec8d92d2\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.504983 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4389fde5-9aa6-4ae6-b993-6710ec8d92d2-proxy-ca-bundles\") pod \"controller-manager-59b6f7d894-d4f7s\" (UID: \"4389fde5-9aa6-4ae6-b993-6710ec8d92d2\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.505696 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4389fde5-9aa6-4ae6-b993-6710ec8d92d2-client-ca\") pod \"controller-manager-59b6f7d894-d4f7s\" (UID: \"4389fde5-9aa6-4ae6-b993-6710ec8d92d2\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.511170 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4389fde5-9aa6-4ae6-b993-6710ec8d92d2-tmp\") pod \"controller-manager-59b6f7d894-d4f7s\" (UID: \"4389fde5-9aa6-4ae6-b993-6710ec8d92d2\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.516923 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4389fde5-9aa6-4ae6-b993-6710ec8d92d2-serving-cert\") pod \"controller-manager-59b6f7d894-d4f7s\" (UID: \"4389fde5-9aa6-4ae6-b993-6710ec8d92d2\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.526760 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-th89d\" (UniqueName: \"kubernetes.io/projected/4389fde5-9aa6-4ae6-b993-6710ec8d92d2-kube-api-access-th89d\") pod \"controller-manager-59b6f7d894-d4f7s\" (UID: \"4389fde5-9aa6-4ae6-b993-6710ec8d92d2\") " pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.616843 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.660728 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh"] Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.702461 5110 generic.go:358] "Generic (PLEG): container finished" podID="0679aae3-6731-4b02-a338-bc2e7f1e9c0f" containerID="90cc3075cdda5fbb25d0c399b7ba5036fd697385846fa3f090a3458e0da6ff34" exitCode=0 Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.702526 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.702540 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" event={"ID":"0679aae3-6731-4b02-a338-bc2e7f1e9c0f","Type":"ContainerDied","Data":"90cc3075cdda5fbb25d0c399b7ba5036fd697385846fa3f090a3458e0da6ff34"} Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.702576 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6" event={"ID":"0679aae3-6731-4b02-a338-bc2e7f1e9c0f","Type":"ContainerDied","Data":"7187c909d76d09cdabf5ef8b784c3fecddd6eb2c48f650bdaabfc30debe51751"} Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.702601 5110 scope.go:117] "RemoveContainer" containerID="90cc3075cdda5fbb25d0c399b7ba5036fd697385846fa3f090a3458e0da6ff34" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.704305 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" event={"ID":"250c4d10-a134-4ee6-970c-c0db68fbcc04","Type":"ContainerStarted","Data":"fb63e55a499ce30d075f98cd147b68ec84b408bf188769b641ad09edc883b15c"} Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.708242 5110 generic.go:358] "Generic (PLEG): container finished" podID="d84ab683-e38c-4b5e-afaf-0b5522e08663" containerID="3a551b1264c9a6269a843622e05a6430940eadec79b60787deb095e1c1a87ee2" exitCode=0 Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.708289 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" event={"ID":"d84ab683-e38c-4b5e-afaf-0b5522e08663","Type":"ContainerDied","Data":"3a551b1264c9a6269a843622e05a6430940eadec79b60787deb095e1c1a87ee2"} Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.708319 
5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" event={"ID":"d84ab683-e38c-4b5e-afaf-0b5522e08663","Type":"ContainerDied","Data":"ef6390a2a9789f64798e0d9025255b18386c98f7ccb921d27591be92803f2609"} Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.708415 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d7655bff6-27hcb" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.729549 5110 scope.go:117] "RemoveContainer" containerID="90cc3075cdda5fbb25d0c399b7ba5036fd697385846fa3f090a3458e0da6ff34" Jan 22 14:20:53 crc kubenswrapper[5110]: E0122 14:20:53.730069 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90cc3075cdda5fbb25d0c399b7ba5036fd697385846fa3f090a3458e0da6ff34\": container with ID starting with 90cc3075cdda5fbb25d0c399b7ba5036fd697385846fa3f090a3458e0da6ff34 not found: ID does not exist" containerID="90cc3075cdda5fbb25d0c399b7ba5036fd697385846fa3f090a3458e0da6ff34" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.730112 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90cc3075cdda5fbb25d0c399b7ba5036fd697385846fa3f090a3458e0da6ff34"} err="failed to get container status \"90cc3075cdda5fbb25d0c399b7ba5036fd697385846fa3f090a3458e0da6ff34\": rpc error: code = NotFound desc = could not find container \"90cc3075cdda5fbb25d0c399b7ba5036fd697385846fa3f090a3458e0da6ff34\": container with ID starting with 90cc3075cdda5fbb25d0c399b7ba5036fd697385846fa3f090a3458e0da6ff34 not found: ID does not exist" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.730142 5110 scope.go:117] "RemoveContainer" containerID="3a551b1264c9a6269a843622e05a6430940eadec79b60787deb095e1c1a87ee2" Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.746976 5110 kubelet.go:2553] "SyncLoop 
DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6"]
Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.757875 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8779cff8-mbqj6"]
Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.760170 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d7655bff6-27hcb"]
Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.763536 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d7655bff6-27hcb"]
Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.765057 5110 scope.go:117] "RemoveContainer" containerID="3a551b1264c9a6269a843622e05a6430940eadec79b60787deb095e1c1a87ee2"
Jan 22 14:20:53 crc kubenswrapper[5110]: E0122 14:20:53.765508 5110 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a551b1264c9a6269a843622e05a6430940eadec79b60787deb095e1c1a87ee2\": container with ID starting with 3a551b1264c9a6269a843622e05a6430940eadec79b60787deb095e1c1a87ee2 not found: ID does not exist" containerID="3a551b1264c9a6269a843622e05a6430940eadec79b60787deb095e1c1a87ee2"
Jan 22 14:20:53 crc kubenswrapper[5110]: I0122 14:20:53.765536 5110 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a551b1264c9a6269a843622e05a6430940eadec79b60787deb095e1c1a87ee2"} err="failed to get container status \"3a551b1264c9a6269a843622e05a6430940eadec79b60787deb095e1c1a87ee2\": rpc error: code = NotFound desc = could not find container \"3a551b1264c9a6269a843622e05a6430940eadec79b60787deb095e1c1a87ee2\": container with ID starting with 3a551b1264c9a6269a843622e05a6430940eadec79b60787deb095e1c1a87ee2 not found: ID does not exist"
Jan 22 14:20:54 crc kubenswrapper[5110]: W0122 14:20:54.040203 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4389fde5_9aa6_4ae6_b993_6710ec8d92d2.slice/crio-f521a0143c575283eedf108da51382abea2e0288720be7c209dc0f4bf49404a8 WatchSource:0}: Error finding container f521a0143c575283eedf108da51382abea2e0288720be7c209dc0f4bf49404a8: Status 404 returned error can't find the container with id f521a0143c575283eedf108da51382abea2e0288720be7c209dc0f4bf49404a8
Jan 22 14:20:54 crc kubenswrapper[5110]: I0122 14:20:54.052176 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59b6f7d894-d4f7s"]
Jan 22 14:20:54 crc kubenswrapper[5110]: I0122 14:20:54.284034 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0679aae3-6731-4b02-a338-bc2e7f1e9c0f" path="/var/lib/kubelet/pods/0679aae3-6731-4b02-a338-bc2e7f1e9c0f/volumes"
Jan 22 14:20:54 crc kubenswrapper[5110]: I0122 14:20:54.285985 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d84ab683-e38c-4b5e-afaf-0b5522e08663" path="/var/lib/kubelet/pods/d84ab683-e38c-4b5e-afaf-0b5522e08663/volumes"
Jan 22 14:20:54 crc kubenswrapper[5110]: I0122 14:20:54.714487 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" event={"ID":"4389fde5-9aa6-4ae6-b993-6710ec8d92d2","Type":"ContainerStarted","Data":"bed66aa6f04d82076bd0ee07b631aae4ebe4fea14bf1e1d0d64ac15c6c945370"}
Jan 22 14:20:54 crc kubenswrapper[5110]: I0122 14:20:54.714770 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" event={"ID":"4389fde5-9aa6-4ae6-b993-6710ec8d92d2","Type":"ContainerStarted","Data":"f521a0143c575283eedf108da51382abea2e0288720be7c209dc0f4bf49404a8"}
Jan 22 14:20:54 crc kubenswrapper[5110]: I0122 14:20:54.714795 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s"
Jan 22 14:20:54 crc kubenswrapper[5110]: I0122 14:20:54.716751 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" event={"ID":"250c4d10-a134-4ee6-970c-c0db68fbcc04","Type":"ContainerStarted","Data":"026ef13e37c1c71fdaf88dfe276690d2fe1ca9fddd7e8f3633700ba4d2318d6e"}
Jan 22 14:20:54 crc kubenswrapper[5110]: I0122 14:20:54.717036 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh"
Jan 22 14:20:54 crc kubenswrapper[5110]: I0122 14:20:54.734614 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh"
Jan 22 14:20:54 crc kubenswrapper[5110]: I0122 14:20:54.745520 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s" podStartSLOduration=2.745499753 podStartE2EDuration="2.745499753s" podCreationTimestamp="2026-01-22 14:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:20:54.740735897 +0000 UTC m=+334.962820276" watchObservedRunningTime="2026-01-22 14:20:54.745499753 +0000 UTC m=+334.967584112"
Jan 22 14:20:54 crc kubenswrapper[5110]: I0122 14:20:54.763358 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58c57c57c4-dh4jh" podStartSLOduration=2.763336227 podStartE2EDuration="2.763336227s" podCreationTimestamp="2026-01-22 14:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:20:54.755422107 +0000 UTC m=+334.977506466" watchObservedRunningTime="2026-01-22 14:20:54.763336227 +0000 UTC m=+334.985420586"
Jan 22 14:20:54 crc kubenswrapper[5110]: I0122 14:20:54.869022 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59b6f7d894-d4f7s"
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.547825 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hxzv2"]
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.552706 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hxzv2" podUID="d099c037-9022-46df-8a66-e2856ee9dbd9" containerName="registry-server" containerID="cri-o://2a3585376f58dfda80a1c499c20d35ff254bcc810248cf3f4818892320a95885" gracePeriod=30
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.569841 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6z98k"]
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.570636 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6z98k" podUID="e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7" containerName="registry-server" containerID="cri-o://919e3d2a8af0cde2a617c7092598cb61e1cec71cdb1e3a7a3a1d2cf9a4eef276" gracePeriod=30
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.577515 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-74rfw"]
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.577919 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" podUID="3adc63ca-ac54-461a-9a91-10ba0b85fa2b" containerName="marketplace-operator" containerID="cri-o://3dc86dc50f5848a37e24e1fa27b22f8abc3b2972b4837206e372864b64b8e0a2" gracePeriod=30
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.585082 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xk6fr"]
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.585519 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xk6fr" podUID="0c5d2009-8141-4816-b0d1-350eaee192ef" containerName="registry-server" containerID="cri-o://58c73875a2558fceeeca0cfff3502270af75292ff4fdb8d459bf5052873448e8" gracePeriod=30
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.589174 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wwt5t"]
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.589495 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wwt5t" podUID="39b263fe-d08c-46d3-ba73-30ad3e8deec1" containerName="registry-server" containerID="cri-o://938e023755b1d0b6046a3ce5116f8dfcb783679ba287bf10d9661e111e0a2927" gracePeriod=30
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.600668 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-bnws7"]
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.614128 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-bnws7"]
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.614472 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7"
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.763309 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/178a27ab-0bb7-4a69-ad49-57ed121ce165-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-bnws7\" (UID: \"178a27ab-0bb7-4a69-ad49-57ed121ce165\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7"
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.763411 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sfmp\" (UniqueName: \"kubernetes.io/projected/178a27ab-0bb7-4a69-ad49-57ed121ce165-kube-api-access-6sfmp\") pod \"marketplace-operator-547dbd544d-bnws7\" (UID: \"178a27ab-0bb7-4a69-ad49-57ed121ce165\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7"
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.763538 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/178a27ab-0bb7-4a69-ad49-57ed121ce165-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-bnws7\" (UID: \"178a27ab-0bb7-4a69-ad49-57ed121ce165\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7"
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.763691 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/178a27ab-0bb7-4a69-ad49-57ed121ce165-tmp\") pod \"marketplace-operator-547dbd544d-bnws7\" (UID: \"178a27ab-0bb7-4a69-ad49-57ed121ce165\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7"
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.804710 5110 generic.go:358] "Generic (PLEG): container finished" podID="d099c037-9022-46df-8a66-e2856ee9dbd9" containerID="2a3585376f58dfda80a1c499c20d35ff254bcc810248cf3f4818892320a95885" exitCode=0
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.804785 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxzv2" event={"ID":"d099c037-9022-46df-8a66-e2856ee9dbd9","Type":"ContainerDied","Data":"2a3585376f58dfda80a1c499c20d35ff254bcc810248cf3f4818892320a95885"}
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.806540 5110 generic.go:358] "Generic (PLEG): container finished" podID="3adc63ca-ac54-461a-9a91-10ba0b85fa2b" containerID="3dc86dc50f5848a37e24e1fa27b22f8abc3b2972b4837206e372864b64b8e0a2" exitCode=0
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.806556 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" event={"ID":"3adc63ca-ac54-461a-9a91-10ba0b85fa2b","Type":"ContainerDied","Data":"3dc86dc50f5848a37e24e1fa27b22f8abc3b2972b4837206e372864b64b8e0a2"}
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.806601 5110 scope.go:117] "RemoveContainer" containerID="9531453046d5270ca61029f306a797316355364b48f372a98cccf355e8005f9e"
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.809512 5110 generic.go:358] "Generic (PLEG): container finished" podID="e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7" containerID="919e3d2a8af0cde2a617c7092598cb61e1cec71cdb1e3a7a3a1d2cf9a4eef276" exitCode=0
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.809593 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6z98k" event={"ID":"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7","Type":"ContainerDied","Data":"919e3d2a8af0cde2a617c7092598cb61e1cec71cdb1e3a7a3a1d2cf9a4eef276"}
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.814102 5110 generic.go:358] "Generic (PLEG): container finished" podID="0c5d2009-8141-4816-b0d1-350eaee192ef" containerID="58c73875a2558fceeeca0cfff3502270af75292ff4fdb8d459bf5052873448e8" exitCode=0
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.814136 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk6fr" event={"ID":"0c5d2009-8141-4816-b0d1-350eaee192ef","Type":"ContainerDied","Data":"58c73875a2558fceeeca0cfff3502270af75292ff4fdb8d459bf5052873448e8"}
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.818175 5110 generic.go:358] "Generic (PLEG): container finished" podID="39b263fe-d08c-46d3-ba73-30ad3e8deec1" containerID="938e023755b1d0b6046a3ce5116f8dfcb783679ba287bf10d9661e111e0a2927" exitCode=0
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.818228 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwt5t" event={"ID":"39b263fe-d08c-46d3-ba73-30ad3e8deec1","Type":"ContainerDied","Data":"938e023755b1d0b6046a3ce5116f8dfcb783679ba287bf10d9661e111e0a2927"}
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.865424 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/178a27ab-0bb7-4a69-ad49-57ed121ce165-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-bnws7\" (UID: \"178a27ab-0bb7-4a69-ad49-57ed121ce165\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7"
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.865514 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/178a27ab-0bb7-4a69-ad49-57ed121ce165-tmp\") pod \"marketplace-operator-547dbd544d-bnws7\" (UID: \"178a27ab-0bb7-4a69-ad49-57ed121ce165\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7"
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.865578 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/178a27ab-0bb7-4a69-ad49-57ed121ce165-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-bnws7\" (UID: \"178a27ab-0bb7-4a69-ad49-57ed121ce165\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7"
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.865615 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6sfmp\" (UniqueName: \"kubernetes.io/projected/178a27ab-0bb7-4a69-ad49-57ed121ce165-kube-api-access-6sfmp\") pod \"marketplace-operator-547dbd544d-bnws7\" (UID: \"178a27ab-0bb7-4a69-ad49-57ed121ce165\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7"
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.866451 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/178a27ab-0bb7-4a69-ad49-57ed121ce165-tmp\") pod \"marketplace-operator-547dbd544d-bnws7\" (UID: \"178a27ab-0bb7-4a69-ad49-57ed121ce165\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7"
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.866963 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/178a27ab-0bb7-4a69-ad49-57ed121ce165-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-bnws7\" (UID: \"178a27ab-0bb7-4a69-ad49-57ed121ce165\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7"
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.875132 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/178a27ab-0bb7-4a69-ad49-57ed121ce165-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-bnws7\" (UID: \"178a27ab-0bb7-4a69-ad49-57ed121ce165\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7"
Jan 22 14:21:10 crc kubenswrapper[5110]: I0122 14:21:10.885280 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sfmp\" (UniqueName: \"kubernetes.io/projected/178a27ab-0bb7-4a69-ad49-57ed121ce165-kube-api-access-6sfmp\") pod \"marketplace-operator-547dbd544d-bnws7\" (UID: \"178a27ab-0bb7-4a69-ad49-57ed121ce165\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7"
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.055345 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7"
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.122190 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hxzv2"
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.133401 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6z98k"
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.215810 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xk6fr"
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.243434 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwt5t"
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.248765 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw"
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.272127 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d099c037-9022-46df-8a66-e2856ee9dbd9-catalog-content\") pod \"d099c037-9022-46df-8a66-e2856ee9dbd9\" (UID: \"d099c037-9022-46df-8a66-e2856ee9dbd9\") "
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.272179 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2nwh\" (UniqueName: \"kubernetes.io/projected/d099c037-9022-46df-8a66-e2856ee9dbd9-kube-api-access-k2nwh\") pod \"d099c037-9022-46df-8a66-e2856ee9dbd9\" (UID: \"d099c037-9022-46df-8a66-e2856ee9dbd9\") "
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.272221 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6t6k\" (UniqueName: \"kubernetes.io/projected/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7-kube-api-access-w6t6k\") pod \"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7\" (UID: \"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7\") "
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.272316 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7-catalog-content\") pod \"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7\" (UID: \"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7\") "
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.272371 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d099c037-9022-46df-8a66-e2856ee9dbd9-utilities\") pod \"d099c037-9022-46df-8a66-e2856ee9dbd9\" (UID: \"d099c037-9022-46df-8a66-e2856ee9dbd9\") "
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.272423 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7-utilities\") pod \"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7\" (UID: \"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7\") "
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.274014 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7-utilities" (OuterVolumeSpecName: "utilities") pod "e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7" (UID: "e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.278101 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d099c037-9022-46df-8a66-e2856ee9dbd9-utilities" (OuterVolumeSpecName: "utilities") pod "d099c037-9022-46df-8a66-e2856ee9dbd9" (UID: "d099c037-9022-46df-8a66-e2856ee9dbd9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.294982 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d099c037-9022-46df-8a66-e2856ee9dbd9-kube-api-access-k2nwh" (OuterVolumeSpecName: "kube-api-access-k2nwh") pod "d099c037-9022-46df-8a66-e2856ee9dbd9" (UID: "d099c037-9022-46df-8a66-e2856ee9dbd9"). InnerVolumeSpecName "kube-api-access-k2nwh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.295226 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7-kube-api-access-w6t6k" (OuterVolumeSpecName: "kube-api-access-w6t6k") pod "e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7" (UID: "e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7"). InnerVolumeSpecName "kube-api-access-w6t6k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.307778 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d099c037-9022-46df-8a66-e2856ee9dbd9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d099c037-9022-46df-8a66-e2856ee9dbd9" (UID: "d099c037-9022-46df-8a66-e2856ee9dbd9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.330940 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7" (UID: "e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.373581 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-marketplace-operator-metrics\") pod \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\" (UID: \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\") "
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.373714 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c5d2009-8141-4816-b0d1-350eaee192ef-utilities\") pod \"0c5d2009-8141-4816-b0d1-350eaee192ef\" (UID: \"0c5d2009-8141-4816-b0d1-350eaee192ef\") "
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.373823 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-tmp\") pod \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\" (UID: \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\") "
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.373864 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b263fe-d08c-46d3-ba73-30ad3e8deec1-utilities\") pod \"39b263fe-d08c-46d3-ba73-30ad3e8deec1\" (UID: \"39b263fe-d08c-46d3-ba73-30ad3e8deec1\") "
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.373898 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-marketplace-trusted-ca\") pod \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\" (UID: \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\") "
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.373972 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn6xz\" (UniqueName: \"kubernetes.io/projected/0c5d2009-8141-4816-b0d1-350eaee192ef-kube-api-access-xn6xz\") pod \"0c5d2009-8141-4816-b0d1-350eaee192ef\" (UID: \"0c5d2009-8141-4816-b0d1-350eaee192ef\") "
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.374009 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b263fe-d08c-46d3-ba73-30ad3e8deec1-catalog-content\") pod \"39b263fe-d08c-46d3-ba73-30ad3e8deec1\" (UID: \"39b263fe-d08c-46d3-ba73-30ad3e8deec1\") "
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.374072 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cftsn\" (UniqueName: \"kubernetes.io/projected/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-kube-api-access-cftsn\") pod \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\" (UID: \"3adc63ca-ac54-461a-9a91-10ba0b85fa2b\") "
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.374129 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztpx9\" (UniqueName: \"kubernetes.io/projected/39b263fe-d08c-46d3-ba73-30ad3e8deec1-kube-api-access-ztpx9\") pod \"39b263fe-d08c-46d3-ba73-30ad3e8deec1\" (UID: \"39b263fe-d08c-46d3-ba73-30ad3e8deec1\") "
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.374162 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c5d2009-8141-4816-b0d1-350eaee192ef-catalog-content\") pod \"0c5d2009-8141-4816-b0d1-350eaee192ef\" (UID: \"0c5d2009-8141-4816-b0d1-350eaee192ef\") "
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.375536 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c5d2009-8141-4816-b0d1-350eaee192ef-utilities" (OuterVolumeSpecName: "utilities") pod "0c5d2009-8141-4816-b0d1-350eaee192ef" (UID: "0c5d2009-8141-4816-b0d1-350eaee192ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.375768 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.375865 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c5d2009-8141-4816-b0d1-350eaee192ef-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.375930 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d099c037-9022-46df-8a66-e2856ee9dbd9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.375998 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k2nwh\" (UniqueName: \"kubernetes.io/projected/d099c037-9022-46df-8a66-e2856ee9dbd9-kube-api-access-k2nwh\") on node \"crc\" DevicePath \"\""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.376054 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w6t6k\" (UniqueName: \"kubernetes.io/projected/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7-kube-api-access-w6t6k\") on node \"crc\" DevicePath \"\""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.376111 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.376168 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d099c037-9022-46df-8a66-e2856ee9dbd9-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.377025 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3adc63ca-ac54-461a-9a91-10ba0b85fa2b" (UID: "3adc63ca-ac54-461a-9a91-10ba0b85fa2b"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.378037 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-kube-api-access-cftsn" (OuterVolumeSpecName: "kube-api-access-cftsn") pod "3adc63ca-ac54-461a-9a91-10ba0b85fa2b" (UID: "3adc63ca-ac54-461a-9a91-10ba0b85fa2b"). InnerVolumeSpecName "kube-api-access-cftsn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.379611 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c5d2009-8141-4816-b0d1-350eaee192ef-kube-api-access-xn6xz" (OuterVolumeSpecName: "kube-api-access-xn6xz") pod "0c5d2009-8141-4816-b0d1-350eaee192ef" (UID: "0c5d2009-8141-4816-b0d1-350eaee192ef"). InnerVolumeSpecName "kube-api-access-xn6xz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.380023 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-tmp" (OuterVolumeSpecName: "tmp") pod "3adc63ca-ac54-461a-9a91-10ba0b85fa2b" (UID: "3adc63ca-ac54-461a-9a91-10ba0b85fa2b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.383248 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b263fe-d08c-46d3-ba73-30ad3e8deec1-kube-api-access-ztpx9" (OuterVolumeSpecName: "kube-api-access-ztpx9") pod "39b263fe-d08c-46d3-ba73-30ad3e8deec1" (UID: "39b263fe-d08c-46d3-ba73-30ad3e8deec1"). InnerVolumeSpecName "kube-api-access-ztpx9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.383318 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39b263fe-d08c-46d3-ba73-30ad3e8deec1-utilities" (OuterVolumeSpecName: "utilities") pod "39b263fe-d08c-46d3-ba73-30ad3e8deec1" (UID: "39b263fe-d08c-46d3-ba73-30ad3e8deec1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.387326 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3adc63ca-ac54-461a-9a91-10ba0b85fa2b" (UID: "3adc63ca-ac54-461a-9a91-10ba0b85fa2b"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.388085 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c5d2009-8141-4816-b0d1-350eaee192ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c5d2009-8141-4816-b0d1-350eaee192ef" (UID: "0c5d2009-8141-4816-b0d1-350eaee192ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.474613 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39b263fe-d08c-46d3-ba73-30ad3e8deec1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39b263fe-d08c-46d3-ba73-30ad3e8deec1" (UID: "39b263fe-d08c-46d3-ba73-30ad3e8deec1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.477468 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cftsn\" (UniqueName: \"kubernetes.io/projected/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-kube-api-access-cftsn\") on node \"crc\" DevicePath \"\""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.477511 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ztpx9\" (UniqueName: \"kubernetes.io/projected/39b263fe-d08c-46d3-ba73-30ad3e8deec1-kube-api-access-ztpx9\") on node \"crc\" DevicePath \"\""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.477526 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c5d2009-8141-4816-b0d1-350eaee192ef-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.477538 5110 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.477550 5110 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-tmp\") on node \"crc\" DevicePath \"\""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.477564 5110 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39b263fe-d08c-46d3-ba73-30ad3e8deec1-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.477575 5110 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3adc63ca-ac54-461a-9a91-10ba0b85fa2b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.477587 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xn6xz\" (UniqueName: \"kubernetes.io/projected/0c5d2009-8141-4816-b0d1-350eaee192ef-kube-api-access-xn6xz\") on node \"crc\" DevicePath \"\""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.477597 5110 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39b263fe-d08c-46d3-ba73-30ad3e8deec1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.553265 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-bnws7"]
Jan 22 14:21:11 crc kubenswrapper[5110]: W0122 14:21:11.556963 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod178a27ab_0bb7_4a69_ad49_57ed121ce165.slice/crio-a2dcb66c8cf5369a96c33235e50484ecf6d4566ee76ec5967d4de433f1dd5460 WatchSource:0}: Error finding container a2dcb66c8cf5369a96c33235e50484ecf6d4566ee76ec5967d4de433f1dd5460: Status 404 returned error can't find the container with id a2dcb66c8cf5369a96c33235e50484ecf6d4566ee76ec5967d4de433f1dd5460
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.837725 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7" event={"ID":"178a27ab-0bb7-4a69-ad49-57ed121ce165","Type":"ContainerStarted","Data":"7d9102279621551aa0965dd696ab6f74af2078fa616d32e239551005124239b3"}
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.838069 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7" event={"ID":"178a27ab-0bb7-4a69-ad49-57ed121ce165","Type":"ContainerStarted","Data":"a2dcb66c8cf5369a96c33235e50484ecf6d4566ee76ec5967d4de433f1dd5460"}
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.838440 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7"
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.840211 5110 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-bnws7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" start-of-body=
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.840337 5110 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7" podUID="178a27ab-0bb7-4a69-ad49-57ed121ce165" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused"
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.844116 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hxzv2" event={"ID":"d099c037-9022-46df-8a66-e2856ee9dbd9","Type":"ContainerDied","Data":"2b025fbd8fea60ff561b1a8310849e98f2899c01e017eb98563e2ba254819f21"}
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.844213 5110 scope.go:117] "RemoveContainer" containerID="2a3585376f58dfda80a1c499c20d35ff254bcc810248cf3f4818892320a95885"
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.844792 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hxzv2"
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.850908 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw"
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.850909 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-74rfw" event={"ID":"3adc63ca-ac54-461a-9a91-10ba0b85fa2b","Type":"ContainerDied","Data":"c6ce80a45875b11a4779155da6a4d4afd657356c97fb5ee2e766703d40ba6928"}
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.861840 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6z98k" event={"ID":"e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7","Type":"ContainerDied","Data":"acf6aafbe23ef261df8f336de8015dd816f051a483c716c7be4047c618161c95"}
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.861987 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6z98k"
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.864780 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7" podStartSLOduration=1.8647539640000002 podStartE2EDuration="1.864753964s" podCreationTimestamp="2026-01-22 14:21:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:21:11.857948763 +0000 UTC m=+352.080033132" watchObservedRunningTime="2026-01-22 14:21:11.864753964 +0000 UTC m=+352.086838323"
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.867731 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xk6fr" event={"ID":"0c5d2009-8141-4816-b0d1-350eaee192ef","Type":"ContainerDied","Data":"98fe0a7d55c41ed49d42e70404f635109a7d01ae183d1c47ec9d8c7ae25e85c3"}
Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.867921 5110 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xk6fr" Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.882739 5110 scope.go:117] "RemoveContainer" containerID="2a1b4f216ea26c5181aedf4d36e52d9701d7b6eb69a3850e09217bf1ac8f695d" Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.884371 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwt5t" event={"ID":"39b263fe-d08c-46d3-ba73-30ad3e8deec1","Type":"ContainerDied","Data":"d2087f1e44b69397528ea62501acc00e4cc7b140643d3dd6a4b423392ef7536a"} Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.884543 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwt5t" Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.907983 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hxzv2"] Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.912784 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hxzv2"] Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.924050 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-74rfw"] Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.924860 5110 scope.go:117] "RemoveContainer" containerID="515a7907df2eb4c829879d15bad44609ed7c3871e29fa7b71c78271a206e1091" Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.932340 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-74rfw"] Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.948831 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xk6fr"] Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.954994 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-xk6fr"] Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.955685 5110 scope.go:117] "RemoveContainer" containerID="3dc86dc50f5848a37e24e1fa27b22f8abc3b2972b4837206e372864b64b8e0a2" Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.958928 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wwt5t"] Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.961134 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wwt5t"] Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.971487 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6z98k"] Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.974851 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6z98k"] Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.977984 5110 scope.go:117] "RemoveContainer" containerID="919e3d2a8af0cde2a617c7092598cb61e1cec71cdb1e3a7a3a1d2cf9a4eef276" Jan 22 14:21:11 crc kubenswrapper[5110]: I0122 14:21:11.991490 5110 scope.go:117] "RemoveContainer" containerID="1b3a9651196fb50cdf2ee2d0efbd419ceef76ae5900cfd11e9c9a68366068950" Jan 22 14:21:12 crc kubenswrapper[5110]: I0122 14:21:12.015180 5110 scope.go:117] "RemoveContainer" containerID="7111ab1340e5e3aff066db4d409b8ded63457cc7666e6f78fc6b8b7f3802ec04" Jan 22 14:21:12 crc kubenswrapper[5110]: I0122 14:21:12.035204 5110 scope.go:117] "RemoveContainer" containerID="58c73875a2558fceeeca0cfff3502270af75292ff4fdb8d459bf5052873448e8" Jan 22 14:21:12 crc kubenswrapper[5110]: I0122 14:21:12.055060 5110 scope.go:117] "RemoveContainer" containerID="1e3f8b7ee1cc818bc9b40e013850ba1a6381589dec276d1c9ce553a43399eb17" Jan 22 14:21:12 crc kubenswrapper[5110]: I0122 14:21:12.071220 5110 scope.go:117] "RemoveContainer" containerID="5242cc1f572a5eff9a10abc411fe69f45c879e72512d1af899e61b6fb7a448f7" Jan 
22 14:21:12 crc kubenswrapper[5110]: I0122 14:21:12.089250 5110 scope.go:117] "RemoveContainer" containerID="938e023755b1d0b6046a3ce5116f8dfcb783679ba287bf10d9661e111e0a2927" Jan 22 14:21:12 crc kubenswrapper[5110]: I0122 14:21:12.115158 5110 scope.go:117] "RemoveContainer" containerID="a7044646b488586b273e861c198848ec5cba3559e937f0ae37945da970f05c1a" Jan 22 14:21:12 crc kubenswrapper[5110]: I0122 14:21:12.134558 5110 scope.go:117] "RemoveContainer" containerID="4ec51597be03823ca75541b8bb490f7914fddc51fe0b4d99df49b91874f32b3e" Jan 22 14:21:12 crc kubenswrapper[5110]: I0122 14:21:12.280852 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c5d2009-8141-4816-b0d1-350eaee192ef" path="/var/lib/kubelet/pods/0c5d2009-8141-4816-b0d1-350eaee192ef/volumes" Jan 22 14:21:12 crc kubenswrapper[5110]: I0122 14:21:12.281448 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39b263fe-d08c-46d3-ba73-30ad3e8deec1" path="/var/lib/kubelet/pods/39b263fe-d08c-46d3-ba73-30ad3e8deec1/volumes" Jan 22 14:21:12 crc kubenswrapper[5110]: I0122 14:21:12.282109 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3adc63ca-ac54-461a-9a91-10ba0b85fa2b" path="/var/lib/kubelet/pods/3adc63ca-ac54-461a-9a91-10ba0b85fa2b/volumes" Jan 22 14:21:12 crc kubenswrapper[5110]: I0122 14:21:12.282987 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d099c037-9022-46df-8a66-e2856ee9dbd9" path="/var/lib/kubelet/pods/d099c037-9022-46df-8a66-e2856ee9dbd9/volumes" Jan 22 14:21:12 crc kubenswrapper[5110]: I0122 14:21:12.283545 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7" path="/var/lib/kubelet/pods/e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7/volumes" Jan 22 14:21:12 crc kubenswrapper[5110]: I0122 14:21:12.901234 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-bnws7" Jan 22 
14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350042 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5w4lm"] Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350544 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7" containerName="extract-utilities" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350556 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7" containerName="extract-utilities" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350565 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7" containerName="extract-content" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350571 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7" containerName="extract-content" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350577 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="39b263fe-d08c-46d3-ba73-30ad3e8deec1" containerName="extract-content" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350586 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b263fe-d08c-46d3-ba73-30ad3e8deec1" containerName="extract-content" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350593 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c5d2009-8141-4816-b0d1-350eaee192ef" containerName="extract-content" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350598 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5d2009-8141-4816-b0d1-350eaee192ef" containerName="extract-content" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350610 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3adc63ca-ac54-461a-9a91-10ba0b85fa2b" 
containerName="marketplace-operator" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350631 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3adc63ca-ac54-461a-9a91-10ba0b85fa2b" containerName="marketplace-operator" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350643 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d099c037-9022-46df-8a66-e2856ee9dbd9" containerName="extract-content" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350649 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="d099c037-9022-46df-8a66-e2856ee9dbd9" containerName="extract-content" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350660 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d099c037-9022-46df-8a66-e2856ee9dbd9" containerName="extract-utilities" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350667 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="d099c037-9022-46df-8a66-e2856ee9dbd9" containerName="extract-utilities" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350673 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d099c037-9022-46df-8a66-e2856ee9dbd9" containerName="registry-server" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350678 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="d099c037-9022-46df-8a66-e2856ee9dbd9" containerName="registry-server" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350685 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="39b263fe-d08c-46d3-ba73-30ad3e8deec1" containerName="extract-utilities" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350690 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b263fe-d08c-46d3-ba73-30ad3e8deec1" containerName="extract-utilities" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350699 5110 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="39b263fe-d08c-46d3-ba73-30ad3e8deec1" containerName="registry-server" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350704 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b263fe-d08c-46d3-ba73-30ad3e8deec1" containerName="registry-server" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350714 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7" containerName="registry-server" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350719 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7" containerName="registry-server" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350726 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c5d2009-8141-4816-b0d1-350eaee192ef" containerName="extract-utilities" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350731 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5d2009-8141-4816-b0d1-350eaee192ef" containerName="extract-utilities" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350736 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0c5d2009-8141-4816-b0d1-350eaee192ef" containerName="registry-server" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350741 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5d2009-8141-4816-b0d1-350eaee192ef" containerName="registry-server" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350750 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3adc63ca-ac54-461a-9a91-10ba0b85fa2b" containerName="marketplace-operator" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350756 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="3adc63ca-ac54-461a-9a91-10ba0b85fa2b" containerName="marketplace-operator" Jan 22 14:21:13 crc 
kubenswrapper[5110]: I0122 14:21:13.350866 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="d099c037-9022-46df-8a66-e2856ee9dbd9" containerName="registry-server" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350881 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="39b263fe-d08c-46d3-ba73-30ad3e8deec1" containerName="registry-server" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350891 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3adc63ca-ac54-461a-9a91-10ba0b85fa2b" containerName="marketplace-operator" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350901 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="e19a5f1e-d63d-47ef-bfd6-5297b60f2fd7" containerName="registry-server" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.350912 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="0c5d2009-8141-4816-b0d1-350eaee192ef" containerName="registry-server" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.351104 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="3adc63ca-ac54-461a-9a91-10ba0b85fa2b" containerName="marketplace-operator" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.360579 5110 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5w4lm" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.364052 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5w4lm"] Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.365068 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.502344 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58fw9\" (UniqueName: \"kubernetes.io/projected/069f9533-2cc7-4412-b07f-699b0619d297-kube-api-access-58fw9\") pod \"redhat-operators-5w4lm\" (UID: \"069f9533-2cc7-4412-b07f-699b0619d297\") " pod="openshift-marketplace/redhat-operators-5w4lm" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.502490 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/069f9533-2cc7-4412-b07f-699b0619d297-utilities\") pod \"redhat-operators-5w4lm\" (UID: \"069f9533-2cc7-4412-b07f-699b0619d297\") " pod="openshift-marketplace/redhat-operators-5w4lm" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.502540 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/069f9533-2cc7-4412-b07f-699b0619d297-catalog-content\") pod \"redhat-operators-5w4lm\" (UID: \"069f9533-2cc7-4412-b07f-699b0619d297\") " pod="openshift-marketplace/redhat-operators-5w4lm" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.604032 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-58fw9\" (UniqueName: \"kubernetes.io/projected/069f9533-2cc7-4412-b07f-699b0619d297-kube-api-access-58fw9\") pod \"redhat-operators-5w4lm\" (UID: 
\"069f9533-2cc7-4412-b07f-699b0619d297\") " pod="openshift-marketplace/redhat-operators-5w4lm" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.604142 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/069f9533-2cc7-4412-b07f-699b0619d297-utilities\") pod \"redhat-operators-5w4lm\" (UID: \"069f9533-2cc7-4412-b07f-699b0619d297\") " pod="openshift-marketplace/redhat-operators-5w4lm" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.604184 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/069f9533-2cc7-4412-b07f-699b0619d297-catalog-content\") pod \"redhat-operators-5w4lm\" (UID: \"069f9533-2cc7-4412-b07f-699b0619d297\") " pod="openshift-marketplace/redhat-operators-5w4lm" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.604616 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/069f9533-2cc7-4412-b07f-699b0619d297-catalog-content\") pod \"redhat-operators-5w4lm\" (UID: \"069f9533-2cc7-4412-b07f-699b0619d297\") " pod="openshift-marketplace/redhat-operators-5w4lm" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.605042 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/069f9533-2cc7-4412-b07f-699b0619d297-utilities\") pod \"redhat-operators-5w4lm\" (UID: \"069f9533-2cc7-4412-b07f-699b0619d297\") " pod="openshift-marketplace/redhat-operators-5w4lm" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.622868 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-58fw9\" (UniqueName: \"kubernetes.io/projected/069f9533-2cc7-4412-b07f-699b0619d297-kube-api-access-58fw9\") pod \"redhat-operators-5w4lm\" (UID: \"069f9533-2cc7-4412-b07f-699b0619d297\") " 
pod="openshift-marketplace/redhat-operators-5w4lm" Jan 22 14:21:13 crc kubenswrapper[5110]: I0122 14:21:13.684882 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5w4lm" Jan 22 14:21:14 crc kubenswrapper[5110]: I0122 14:21:14.091092 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5w4lm"] Jan 22 14:21:14 crc kubenswrapper[5110]: I0122 14:21:14.360280 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xkl4t"] Jan 22 14:21:14 crc kubenswrapper[5110]: I0122 14:21:14.416993 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xkl4t"] Jan 22 14:21:14 crc kubenswrapper[5110]: I0122 14:21:14.417179 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xkl4t" Jan 22 14:21:14 crc kubenswrapper[5110]: I0122 14:21:14.420126 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 22 14:21:14 crc kubenswrapper[5110]: I0122 14:21:14.515231 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggqvv\" (UniqueName: \"kubernetes.io/projected/4c850706-f8e9-49b7-9cb0-3c5253802b67-kube-api-access-ggqvv\") pod \"certified-operators-xkl4t\" (UID: \"4c850706-f8e9-49b7-9cb0-3c5253802b67\") " pod="openshift-marketplace/certified-operators-xkl4t" Jan 22 14:21:14 crc kubenswrapper[5110]: I0122 14:21:14.515319 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c850706-f8e9-49b7-9cb0-3c5253802b67-catalog-content\") pod \"certified-operators-xkl4t\" (UID: \"4c850706-f8e9-49b7-9cb0-3c5253802b67\") " pod="openshift-marketplace/certified-operators-xkl4t" 
Jan 22 14:21:14 crc kubenswrapper[5110]: I0122 14:21:14.515350 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c850706-f8e9-49b7-9cb0-3c5253802b67-utilities\") pod \"certified-operators-xkl4t\" (UID: \"4c850706-f8e9-49b7-9cb0-3c5253802b67\") " pod="openshift-marketplace/certified-operators-xkl4t" Jan 22 14:21:14 crc kubenswrapper[5110]: I0122 14:21:14.616972 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ggqvv\" (UniqueName: \"kubernetes.io/projected/4c850706-f8e9-49b7-9cb0-3c5253802b67-kube-api-access-ggqvv\") pod \"certified-operators-xkl4t\" (UID: \"4c850706-f8e9-49b7-9cb0-3c5253802b67\") " pod="openshift-marketplace/certified-operators-xkl4t" Jan 22 14:21:14 crc kubenswrapper[5110]: I0122 14:21:14.617596 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c850706-f8e9-49b7-9cb0-3c5253802b67-catalog-content\") pod \"certified-operators-xkl4t\" (UID: \"4c850706-f8e9-49b7-9cb0-3c5253802b67\") " pod="openshift-marketplace/certified-operators-xkl4t" Jan 22 14:21:14 crc kubenswrapper[5110]: I0122 14:21:14.617757 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c850706-f8e9-49b7-9cb0-3c5253802b67-utilities\") pod \"certified-operators-xkl4t\" (UID: \"4c850706-f8e9-49b7-9cb0-3c5253802b67\") " pod="openshift-marketplace/certified-operators-xkl4t" Jan 22 14:21:14 crc kubenswrapper[5110]: I0122 14:21:14.618068 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c850706-f8e9-49b7-9cb0-3c5253802b67-catalog-content\") pod \"certified-operators-xkl4t\" (UID: \"4c850706-f8e9-49b7-9cb0-3c5253802b67\") " pod="openshift-marketplace/certified-operators-xkl4t" Jan 22 14:21:14 
crc kubenswrapper[5110]: I0122 14:21:14.618966 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c850706-f8e9-49b7-9cb0-3c5253802b67-utilities\") pod \"certified-operators-xkl4t\" (UID: \"4c850706-f8e9-49b7-9cb0-3c5253802b67\") " pod="openshift-marketplace/certified-operators-xkl4t" Jan 22 14:21:14 crc kubenswrapper[5110]: I0122 14:21:14.648151 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggqvv\" (UniqueName: \"kubernetes.io/projected/4c850706-f8e9-49b7-9cb0-3c5253802b67-kube-api-access-ggqvv\") pod \"certified-operators-xkl4t\" (UID: \"4c850706-f8e9-49b7-9cb0-3c5253802b67\") " pod="openshift-marketplace/certified-operators-xkl4t" Jan 22 14:21:14 crc kubenswrapper[5110]: I0122 14:21:14.734006 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xkl4t" Jan 22 14:21:14 crc kubenswrapper[5110]: I0122 14:21:14.907516 5110 generic.go:358] "Generic (PLEG): container finished" podID="069f9533-2cc7-4412-b07f-699b0619d297" containerID="8d9776335788c80915333f5bc988baeb95b39076583762f1860888b81e16c240" exitCode=0 Jan 22 14:21:14 crc kubenswrapper[5110]: I0122 14:21:14.907691 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5w4lm" event={"ID":"069f9533-2cc7-4412-b07f-699b0619d297","Type":"ContainerDied","Data":"8d9776335788c80915333f5bc988baeb95b39076583762f1860888b81e16c240"} Jan 22 14:21:14 crc kubenswrapper[5110]: I0122 14:21:14.907723 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5w4lm" event={"ID":"069f9533-2cc7-4412-b07f-699b0619d297","Type":"ContainerStarted","Data":"20013d4c6e045dc4792252f52dc826a82daa374c106b85e4e6bb846f9d969960"} Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.126433 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-xkl4t"] Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.491824 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"] Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.498065 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk" Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.506759 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"] Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.643145 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/eb2c9796-697c-404d-a49d-8dcb7e09edb5-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk" Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.643203 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk" Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.643226 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb2c9796-697c-404d-a49d-8dcb7e09edb5-trusted-ca\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk" Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 
14:21:15.643248 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/eb2c9796-697c-404d-a49d-8dcb7e09edb5-registry-certificates\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.643283 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/eb2c9796-697c-404d-a49d-8dcb7e09edb5-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.643305 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eb2c9796-697c-404d-a49d-8dcb7e09edb5-bound-sa-token\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.643324 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9f42\" (UniqueName: \"kubernetes.io/projected/eb2c9796-697c-404d-a49d-8dcb7e09edb5-kube-api-access-x9f42\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.643353 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/eb2c9796-697c-404d-a49d-8dcb7e09edb5-registry-tls\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.674999 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.744845 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x9f42\" (UniqueName: \"kubernetes.io/projected/eb2c9796-697c-404d-a49d-8dcb7e09edb5-kube-api-access-x9f42\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.744929 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/eb2c9796-697c-404d-a49d-8dcb7e09edb5-registry-tls\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.745009 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/eb2c9796-697c-404d-a49d-8dcb7e09edb5-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.746123 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb2c9796-697c-404d-a49d-8dcb7e09edb5-trusted-ca\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.746194 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/eb2c9796-697c-404d-a49d-8dcb7e09edb5-registry-certificates\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.746306 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/eb2c9796-697c-404d-a49d-8dcb7e09edb5-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.746377 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eb2c9796-697c-404d-a49d-8dcb7e09edb5-bound-sa-token\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.746896 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/eb2c9796-697c-404d-a49d-8dcb7e09edb5-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.747475 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb2c9796-697c-404d-a49d-8dcb7e09edb5-trusted-ca\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.747652 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/eb2c9796-697c-404d-a49d-8dcb7e09edb5-registry-certificates\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.754370 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/eb2c9796-697c-404d-a49d-8dcb7e09edb5-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.755338 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/eb2c9796-697c-404d-a49d-8dcb7e09edb5-registry-tls\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.762790 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eb2c9796-697c-404d-a49d-8dcb7e09edb5-bound-sa-token\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.762855 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nvr2c"]
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.766907 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9f42\" (UniqueName: \"kubernetes.io/projected/eb2c9796-697c-404d-a49d-8dcb7e09edb5-kube-api-access-x9f42\") pod \"image-registry-5d9d95bf5b-vp4hk\" (UID: \"eb2c9796-697c-404d-a49d-8dcb7e09edb5\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.771123 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nvr2c"]
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.771249 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nvr2c"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.773681 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.827545 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.847008 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f1c8024-920c-4c5d-80bf-b6a39a06e3d5-utilities\") pod \"community-operators-nvr2c\" (UID: \"9f1c8024-920c-4c5d-80bf-b6a39a06e3d5\") " pod="openshift-marketplace/community-operators-nvr2c"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.847277 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f1c8024-920c-4c5d-80bf-b6a39a06e3d5-catalog-content\") pod \"community-operators-nvr2c\" (UID: \"9f1c8024-920c-4c5d-80bf-b6a39a06e3d5\") " pod="openshift-marketplace/community-operators-nvr2c"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.847396 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbdvl\" (UniqueName: \"kubernetes.io/projected/9f1c8024-920c-4c5d-80bf-b6a39a06e3d5-kube-api-access-lbdvl\") pod \"community-operators-nvr2c\" (UID: \"9f1c8024-920c-4c5d-80bf-b6a39a06e3d5\") " pod="openshift-marketplace/community-operators-nvr2c"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.923446 5110 generic.go:358] "Generic (PLEG): container finished" podID="4c850706-f8e9-49b7-9cb0-3c5253802b67" containerID="e4123707c55e8c304261afeacde8307cda9fa902fb0205d6cad25bcf2014b8ce" exitCode=0
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.923773 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkl4t" event={"ID":"4c850706-f8e9-49b7-9cb0-3c5253802b67","Type":"ContainerDied","Data":"e4123707c55e8c304261afeacde8307cda9fa902fb0205d6cad25bcf2014b8ce"}
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.923832 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkl4t" event={"ID":"4c850706-f8e9-49b7-9cb0-3c5253802b67","Type":"ContainerStarted","Data":"b0680a4abf34838df2cce8b060dc934621950b4cbf0fefee904e2320b3cebf49"}
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.948479 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f1c8024-920c-4c5d-80bf-b6a39a06e3d5-utilities\") pod \"community-operators-nvr2c\" (UID: \"9f1c8024-920c-4c5d-80bf-b6a39a06e3d5\") " pod="openshift-marketplace/community-operators-nvr2c"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.948537 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f1c8024-920c-4c5d-80bf-b6a39a06e3d5-catalog-content\") pod \"community-operators-nvr2c\" (UID: \"9f1c8024-920c-4c5d-80bf-b6a39a06e3d5\") " pod="openshift-marketplace/community-operators-nvr2c"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.948573 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lbdvl\" (UniqueName: \"kubernetes.io/projected/9f1c8024-920c-4c5d-80bf-b6a39a06e3d5-kube-api-access-lbdvl\") pod \"community-operators-nvr2c\" (UID: \"9f1c8024-920c-4c5d-80bf-b6a39a06e3d5\") " pod="openshift-marketplace/community-operators-nvr2c"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.949331 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f1c8024-920c-4c5d-80bf-b6a39a06e3d5-catalog-content\") pod \"community-operators-nvr2c\" (UID: \"9f1c8024-920c-4c5d-80bf-b6a39a06e3d5\") " pod="openshift-marketplace/community-operators-nvr2c"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.949517 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f1c8024-920c-4c5d-80bf-b6a39a06e3d5-utilities\") pod \"community-operators-nvr2c\" (UID: \"9f1c8024-920c-4c5d-80bf-b6a39a06e3d5\") " pod="openshift-marketplace/community-operators-nvr2c"
Jan 22 14:21:15 crc kubenswrapper[5110]: I0122 14:21:15.966815 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbdvl\" (UniqueName: \"kubernetes.io/projected/9f1c8024-920c-4c5d-80bf-b6a39a06e3d5-kube-api-access-lbdvl\") pod \"community-operators-nvr2c\" (UID: \"9f1c8024-920c-4c5d-80bf-b6a39a06e3d5\") " pod="openshift-marketplace/community-operators-nvr2c"
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.095333 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nvr2c"
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.233551 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"]
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.486906 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nvr2c"]
Jan 22 14:21:16 crc kubenswrapper[5110]: W0122 14:21:16.543142 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f1c8024_920c_4c5d_80bf_b6a39a06e3d5.slice/crio-84308fe620aca8b5262acccea53c21a57e1a8959dbf9eaffa4cb113653f4aeee WatchSource:0}: Error finding container 84308fe620aca8b5262acccea53c21a57e1a8959dbf9eaffa4cb113653f4aeee: Status 404 returned error can't find the container with id 84308fe620aca8b5262acccea53c21a57e1a8959dbf9eaffa4cb113653f4aeee
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.761233 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h89wg"]
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.775256 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h89wg"]
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.775480 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h89wg"
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.777930 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.861119 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42269a6c-9149-472c-86ec-ac66fec57a7c-utilities\") pod \"redhat-marketplace-h89wg\" (UID: \"42269a6c-9149-472c-86ec-ac66fec57a7c\") " pod="openshift-marketplace/redhat-marketplace-h89wg"
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.861214 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42269a6c-9149-472c-86ec-ac66fec57a7c-catalog-content\") pod \"redhat-marketplace-h89wg\" (UID: \"42269a6c-9149-472c-86ec-ac66fec57a7c\") " pod="openshift-marketplace/redhat-marketplace-h89wg"
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.861242 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v57nh\" (UniqueName: \"kubernetes.io/projected/42269a6c-9149-472c-86ec-ac66fec57a7c-kube-api-access-v57nh\") pod \"redhat-marketplace-h89wg\" (UID: \"42269a6c-9149-472c-86ec-ac66fec57a7c\") " pod="openshift-marketplace/redhat-marketplace-h89wg"
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.931125 5110 generic.go:358] "Generic (PLEG): container finished" podID="069f9533-2cc7-4412-b07f-699b0619d297" containerID="7df9de077ab05776bca2e29a4962406cdd07d85e38e906b7bbf3ef0e058a3b7b" exitCode=0
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.931225 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5w4lm" event={"ID":"069f9533-2cc7-4412-b07f-699b0619d297","Type":"ContainerDied","Data":"7df9de077ab05776bca2e29a4962406cdd07d85e38e906b7bbf3ef0e058a3b7b"}
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.936525 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk" event={"ID":"eb2c9796-697c-404d-a49d-8dcb7e09edb5","Type":"ContainerStarted","Data":"a0de3850c91a72cc04dba9ea23e458b08f365b287bbded2ff70dda4063223dd3"}
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.936592 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk" event={"ID":"eb2c9796-697c-404d-a49d-8dcb7e09edb5","Type":"ContainerStarted","Data":"236d1a4e8bd20878002b72116a826dadc6b8c59844da77f94c76f576f2c6aef5"}
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.938112 5110 generic.go:358] "Generic (PLEG): container finished" podID="9f1c8024-920c-4c5d-80bf-b6a39a06e3d5" containerID="1dd0ca572bc19dd75ae845ba7dd18f7ec1c909e221f14bda3999799f0a39f95c" exitCode=0
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.938411 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nvr2c" event={"ID":"9f1c8024-920c-4c5d-80bf-b6a39a06e3d5","Type":"ContainerDied","Data":"1dd0ca572bc19dd75ae845ba7dd18f7ec1c909e221f14bda3999799f0a39f95c"}
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.938487 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nvr2c" event={"ID":"9f1c8024-920c-4c5d-80bf-b6a39a06e3d5","Type":"ContainerStarted","Data":"84308fe620aca8b5262acccea53c21a57e1a8959dbf9eaffa4cb113653f4aeee"}
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.940892 5110 generic.go:358] "Generic (PLEG): container finished" podID="4c850706-f8e9-49b7-9cb0-3c5253802b67" containerID="7c24dd37fe3000d703807a4a5157bcc52c86ce101885c111031a2d5e67c05e9b" exitCode=0
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.940927 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkl4t" event={"ID":"4c850706-f8e9-49b7-9cb0-3c5253802b67","Type":"ContainerDied","Data":"7c24dd37fe3000d703807a4a5157bcc52c86ce101885c111031a2d5e67c05e9b"}
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.962865 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42269a6c-9149-472c-86ec-ac66fec57a7c-catalog-content\") pod \"redhat-marketplace-h89wg\" (UID: \"42269a6c-9149-472c-86ec-ac66fec57a7c\") " pod="openshift-marketplace/redhat-marketplace-h89wg"
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.962938 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v57nh\" (UniqueName: \"kubernetes.io/projected/42269a6c-9149-472c-86ec-ac66fec57a7c-kube-api-access-v57nh\") pod \"redhat-marketplace-h89wg\" (UID: \"42269a6c-9149-472c-86ec-ac66fec57a7c\") " pod="openshift-marketplace/redhat-marketplace-h89wg"
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.963037 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42269a6c-9149-472c-86ec-ac66fec57a7c-utilities\") pod \"redhat-marketplace-h89wg\" (UID: \"42269a6c-9149-472c-86ec-ac66fec57a7c\") " pod="openshift-marketplace/redhat-marketplace-h89wg"
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.963755 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42269a6c-9149-472c-86ec-ac66fec57a7c-utilities\") pod \"redhat-marketplace-h89wg\" (UID: \"42269a6c-9149-472c-86ec-ac66fec57a7c\") " pod="openshift-marketplace/redhat-marketplace-h89wg"
Jan 22 14:21:16 crc kubenswrapper[5110]: I0122 14:21:16.963767 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42269a6c-9149-472c-86ec-ac66fec57a7c-catalog-content\") pod \"redhat-marketplace-h89wg\" (UID: \"42269a6c-9149-472c-86ec-ac66fec57a7c\") " pod="openshift-marketplace/redhat-marketplace-h89wg"
Jan 22 14:21:17 crc kubenswrapper[5110]: I0122 14:21:17.027509 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v57nh\" (UniqueName: \"kubernetes.io/projected/42269a6c-9149-472c-86ec-ac66fec57a7c-kube-api-access-v57nh\") pod \"redhat-marketplace-h89wg\" (UID: \"42269a6c-9149-472c-86ec-ac66fec57a7c\") " pod="openshift-marketplace/redhat-marketplace-h89wg"
Jan 22 14:21:17 crc kubenswrapper[5110]: I0122 14:21:17.050328 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk" podStartSLOduration=2.0503139 podStartE2EDuration="2.0503139s" podCreationTimestamp="2026-01-22 14:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:21:17.015423884 +0000 UTC m=+357.237508253" watchObservedRunningTime="2026-01-22 14:21:17.0503139 +0000 UTC m=+357.272398259"
Jan 22 14:21:17 crc kubenswrapper[5110]: I0122 14:21:17.195264 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h89wg"
Jan 22 14:21:17 crc kubenswrapper[5110]: I0122 14:21:17.595104 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h89wg"]
Jan 22 14:21:17 crc kubenswrapper[5110]: W0122 14:21:17.602681 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42269a6c_9149_472c_86ec_ac66fec57a7c.slice/crio-79080d2bad8aa4ca36131b127bb5e8b14a8025e6ba182c02e7454c58bcc71268 WatchSource:0}: Error finding container 79080d2bad8aa4ca36131b127bb5e8b14a8025e6ba182c02e7454c58bcc71268: Status 404 returned error can't find the container with id 79080d2bad8aa4ca36131b127bb5e8b14a8025e6ba182c02e7454c58bcc71268
Jan 22 14:21:17 crc kubenswrapper[5110]: I0122 14:21:17.947874 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkl4t" event={"ID":"4c850706-f8e9-49b7-9cb0-3c5253802b67","Type":"ContainerStarted","Data":"7681afc39a5a6ecb76ac95d498c0771c203c08713fb707a85872e7950c7b89ab"}
Jan 22 14:21:17 crc kubenswrapper[5110]: I0122 14:21:17.949121 5110 generic.go:358] "Generic (PLEG): container finished" podID="42269a6c-9149-472c-86ec-ac66fec57a7c" containerID="4b0262a61b477bb71a4fad6936a91fbb2e05af1d059a4adc3fca3c0a9220a7e1" exitCode=0
Jan 22 14:21:17 crc kubenswrapper[5110]: I0122 14:21:17.949191 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h89wg" event={"ID":"42269a6c-9149-472c-86ec-ac66fec57a7c","Type":"ContainerDied","Data":"4b0262a61b477bb71a4fad6936a91fbb2e05af1d059a4adc3fca3c0a9220a7e1"}
Jan 22 14:21:17 crc kubenswrapper[5110]: I0122 14:21:17.949221 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h89wg" event={"ID":"42269a6c-9149-472c-86ec-ac66fec57a7c","Type":"ContainerStarted","Data":"79080d2bad8aa4ca36131b127bb5e8b14a8025e6ba182c02e7454c58bcc71268"}
Jan 22 14:21:17 crc kubenswrapper[5110]: I0122 14:21:17.953040 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5w4lm" event={"ID":"069f9533-2cc7-4412-b07f-699b0619d297","Type":"ContainerStarted","Data":"c003fb720d1dc611a13719ac7d8fc40610a9e789de873bf8c32ece6f4e27318b"}
Jan 22 14:21:17 crc kubenswrapper[5110]: I0122 14:21:17.954733 5110 generic.go:358] "Generic (PLEG): container finished" podID="9f1c8024-920c-4c5d-80bf-b6a39a06e3d5" containerID="1f2a4b030b17755ca894a0a98772ecbc3c126535d6f5eb1ffba1440d2251f0d9" exitCode=0
Jan 22 14:21:17 crc kubenswrapper[5110]: I0122 14:21:17.954803 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nvr2c" event={"ID":"9f1c8024-920c-4c5d-80bf-b6a39a06e3d5","Type":"ContainerDied","Data":"1f2a4b030b17755ca894a0a98772ecbc3c126535d6f5eb1ffba1440d2251f0d9"}
Jan 22 14:21:17 crc kubenswrapper[5110]: I0122 14:21:17.954874 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:17 crc kubenswrapper[5110]: I0122 14:21:17.973587 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xkl4t" podStartSLOduration=3.383453787 podStartE2EDuration="3.973573358s" podCreationTimestamp="2026-01-22 14:21:14 +0000 UTC" firstStartedPulling="2026-01-22 14:21:15.92511942 +0000 UTC m=+356.147203779" lastFinishedPulling="2026-01-22 14:21:16.515238981 +0000 UTC m=+356.737323350" observedRunningTime="2026-01-22 14:21:17.968693839 +0000 UTC m=+358.190778208" watchObservedRunningTime="2026-01-22 14:21:17.973573358 +0000 UTC m=+358.195657717"
Jan 22 14:21:17 crc kubenswrapper[5110]: I0122 14:21:17.990976 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5w4lm" podStartSLOduration=4.030155046 podStartE2EDuration="4.99096296s" podCreationTimestamp="2026-01-22 14:21:13 +0000 UTC" firstStartedPulling="2026-01-22 14:21:14.908525734 +0000 UTC m=+355.130610093" lastFinishedPulling="2026-01-22 14:21:15.869333638 +0000 UTC m=+356.091418007" observedRunningTime="2026-01-22 14:21:17.989017409 +0000 UTC m=+358.211101778" watchObservedRunningTime="2026-01-22 14:21:17.99096296 +0000 UTC m=+358.213047319"
Jan 22 14:21:18 crc kubenswrapper[5110]: I0122 14:21:18.972140 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h89wg" event={"ID":"42269a6c-9149-472c-86ec-ac66fec57a7c","Type":"ContainerStarted","Data":"b16d0cc02abb510c97478d64f501140b42619c01c89fd92583131b5410a0b5db"}
Jan 22 14:21:18 crc kubenswrapper[5110]: I0122 14:21:18.976559 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nvr2c" event={"ID":"9f1c8024-920c-4c5d-80bf-b6a39a06e3d5","Type":"ContainerStarted","Data":"e0c568c0f504e6507761de9d704636a96ccd1fdacb711f05bdea7da360502ad9"}
Jan 22 14:21:19 crc kubenswrapper[5110]: I0122 14:21:19.014700 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nvr2c" podStartSLOduration=3.399015256 podStartE2EDuration="4.014680715s" podCreationTimestamp="2026-01-22 14:21:15 +0000 UTC" firstStartedPulling="2026-01-22 14:21:16.940599767 +0000 UTC m=+357.162684136" lastFinishedPulling="2026-01-22 14:21:17.556265236 +0000 UTC m=+357.778349595" observedRunningTime="2026-01-22 14:21:19.009897708 +0000 UTC m=+359.231982067" watchObservedRunningTime="2026-01-22 14:21:19.014680715 +0000 UTC m=+359.236765084"
Jan 22 14:21:19 crc kubenswrapper[5110]: I0122 14:21:19.983668 5110 generic.go:358] "Generic (PLEG): container finished" podID="42269a6c-9149-472c-86ec-ac66fec57a7c" containerID="b16d0cc02abb510c97478d64f501140b42619c01c89fd92583131b5410a0b5db" exitCode=0
Jan 22 14:21:19 crc kubenswrapper[5110]: I0122 14:21:19.983777 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h89wg" event={"ID":"42269a6c-9149-472c-86ec-ac66fec57a7c","Type":"ContainerDied","Data":"b16d0cc02abb510c97478d64f501140b42619c01c89fd92583131b5410a0b5db"}
Jan 22 14:21:20 crc kubenswrapper[5110]: I0122 14:21:20.991121 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h89wg" event={"ID":"42269a6c-9149-472c-86ec-ac66fec57a7c","Type":"ContainerStarted","Data":"8789a3651e9a64225c3f74ab20ab93234acf6a6b85924ba747530c64ec0f7f00"}
Jan 22 14:21:21 crc kubenswrapper[5110]: I0122 14:21:21.009789 5110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h89wg" podStartSLOduration=4.157613358 podStartE2EDuration="5.009771947s" podCreationTimestamp="2026-01-22 14:21:16 +0000 UTC" firstStartedPulling="2026-01-22 14:21:17.949832108 +0000 UTC m=+358.171916467" lastFinishedPulling="2026-01-22 14:21:18.801990697 +0000 UTC m=+359.024075056" observedRunningTime="2026-01-22 14:21:21.006930371 +0000 UTC m=+361.229014760" watchObservedRunningTime="2026-01-22 14:21:21.009771947 +0000 UTC m=+361.231856316"
Jan 22 14:21:23 crc kubenswrapper[5110]: I0122 14:21:23.685021 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-5w4lm"
Jan 22 14:21:23 crc kubenswrapper[5110]: I0122 14:21:23.685403 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5w4lm"
Jan 22 14:21:23 crc kubenswrapper[5110]: I0122 14:21:23.735248 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5w4lm"
Jan 22 14:21:24 crc kubenswrapper[5110]: I0122 14:21:24.063121 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5w4lm"
Jan 22 14:21:24 crc kubenswrapper[5110]: I0122 14:21:24.735183 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-xkl4t"
Jan 22 14:21:24 crc kubenswrapper[5110]: I0122 14:21:24.735766 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xkl4t"
Jan 22 14:21:24 crc kubenswrapper[5110]: I0122 14:21:24.781382 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xkl4t"
Jan 22 14:21:25 crc kubenswrapper[5110]: I0122 14:21:25.081345 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xkl4t"
Jan 22 14:21:26 crc kubenswrapper[5110]: I0122 14:21:26.105654 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-nvr2c"
Jan 22 14:21:26 crc kubenswrapper[5110]: I0122 14:21:26.105736 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nvr2c"
Jan 22 14:21:26 crc kubenswrapper[5110]: I0122 14:21:26.142458 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nvr2c"
Jan 22 14:21:27 crc kubenswrapper[5110]: I0122 14:21:27.072478 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nvr2c"
Jan 22 14:21:27 crc kubenswrapper[5110]: I0122 14:21:27.196438 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-h89wg"
Jan 22 14:21:27 crc kubenswrapper[5110]: I0122 14:21:27.196491 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h89wg"
Jan 22 14:21:27 crc kubenswrapper[5110]: I0122 14:21:27.232138 5110 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h89wg"
Jan 22 14:21:28 crc kubenswrapper[5110]: I0122 14:21:28.067693 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h89wg"
Jan 22 14:21:38 crc kubenswrapper[5110]: I0122 14:21:38.983083 5110 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-vp4hk"
Jan 22 14:21:39 crc kubenswrapper[5110]: I0122 14:21:39.046413 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-x4jp5"]
Jan 22 14:21:49 crc kubenswrapper[5110]: I0122 14:21:49.692052 5110 patch_prober.go:28] interesting pod/machine-config-daemon-grf5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 14:21:49 crc kubenswrapper[5110]: I0122 14:21:49.692747 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" podUID="6bfecfa4-ce38-4a92-a3dc-588176267b96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 14:22:00 crc kubenswrapper[5110]: I0122 14:22:00.164648 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484862-l7x5v"]
Jan 22 14:22:00 crc kubenswrapper[5110]: I0122 14:22:00.181942 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484862-l7x5v"]
Jan 22 14:22:00 crc kubenswrapper[5110]: I0122 14:22:00.182100 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484862-l7x5v"
Jan 22 14:22:00 crc kubenswrapper[5110]: I0122 14:22:00.184770 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 22 14:22:00 crc kubenswrapper[5110]: I0122 14:22:00.185058 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5nv5f\""
Jan 22 14:22:00 crc kubenswrapper[5110]: I0122 14:22:00.185266 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 22 14:22:00 crc kubenswrapper[5110]: I0122 14:22:00.196862 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phcfg\" (UniqueName: \"kubernetes.io/projected/8461ae1c-afe1-4b23-80f2-146c3b1b20d5-kube-api-access-phcfg\") pod \"auto-csr-approver-29484862-l7x5v\" (UID: \"8461ae1c-afe1-4b23-80f2-146c3b1b20d5\") " pod="openshift-infra/auto-csr-approver-29484862-l7x5v"
Jan 22 14:22:00 crc kubenswrapper[5110]: I0122 14:22:00.298065 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-phcfg\" (UniqueName: \"kubernetes.io/projected/8461ae1c-afe1-4b23-80f2-146c3b1b20d5-kube-api-access-phcfg\") pod \"auto-csr-approver-29484862-l7x5v\" (UID: \"8461ae1c-afe1-4b23-80f2-146c3b1b20d5\") " pod="openshift-infra/auto-csr-approver-29484862-l7x5v"
Jan 22 14:22:00 crc kubenswrapper[5110]: I0122 14:22:00.327399 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-phcfg\" (UniqueName: \"kubernetes.io/projected/8461ae1c-afe1-4b23-80f2-146c3b1b20d5-kube-api-access-phcfg\") pod \"auto-csr-approver-29484862-l7x5v\" (UID: \"8461ae1c-afe1-4b23-80f2-146c3b1b20d5\") " pod="openshift-infra/auto-csr-approver-29484862-l7x5v"
Jan 22 14:22:00 crc kubenswrapper[5110]: I0122 14:22:00.499680 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484862-l7x5v"
Jan 22 14:22:00 crc kubenswrapper[5110]: I0122 14:22:00.909116 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484862-l7x5v"]
Jan 22 14:22:01 crc kubenswrapper[5110]: I0122 14:22:01.198680 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484862-l7x5v" event={"ID":"8461ae1c-afe1-4b23-80f2-146c3b1b20d5","Type":"ContainerStarted","Data":"718849169f8e8bb700278cd04b09434e0c2e47da0d04465ea136f38364b411ff"}
Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.024359 5110 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-gbvq4"
Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.052664 5110 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-gbvq4"
Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.089490 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" podUID="595a4ab3-66a1-41ae-93e2-9476c1b14270" containerName="registry" containerID="cri-o://febfc102e4f0d153ee6aa7f7ca86db3f96a6b5b21de6b520b2fb4197a0e3f24f" gracePeriod=30
Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.214179 5110 generic.go:358] "Generic (PLEG): container finished" podID="595a4ab3-66a1-41ae-93e2-9476c1b14270" containerID="febfc102e4f0d153ee6aa7f7ca86db3f96a6b5b21de6b520b2fb4197a0e3f24f" exitCode=0
Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.214300 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" event={"ID":"595a4ab3-66a1-41ae-93e2-9476c1b14270","Type":"ContainerDied","Data":"febfc102e4f0d153ee6aa7f7ca86db3f96a6b5b21de6b520b2fb4197a0e3f24f"}
Jan 22 14:22:04 crc 
kubenswrapper[5110]: I0122 14:22:04.216316 5110 generic.go:358] "Generic (PLEG): container finished" podID="8461ae1c-afe1-4b23-80f2-146c3b1b20d5" containerID="fdca679b79d10bf0375766f9792c03b844caf754d24a86840674e5f9455d99fb" exitCode=0 Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.216407 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484862-l7x5v" event={"ID":"8461ae1c-afe1-4b23-80f2-146c3b1b20d5","Type":"ContainerDied","Data":"fdca679b79d10bf0375766f9792c03b844caf754d24a86840674e5f9455d99fb"} Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.478717 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.544561 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/595a4ab3-66a1-41ae-93e2-9476c1b14270-registry-tls\") pod \"595a4ab3-66a1-41ae-93e2-9476c1b14270\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.544608 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/595a4ab3-66a1-41ae-93e2-9476c1b14270-bound-sa-token\") pod \"595a4ab3-66a1-41ae-93e2-9476c1b14270\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.544678 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/595a4ab3-66a1-41ae-93e2-9476c1b14270-trusted-ca\") pod \"595a4ab3-66a1-41ae-93e2-9476c1b14270\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.544703 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/595a4ab3-66a1-41ae-93e2-9476c1b14270-ca-trust-extracted\") pod \"595a4ab3-66a1-41ae-93e2-9476c1b14270\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.544811 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"595a4ab3-66a1-41ae-93e2-9476c1b14270\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.544841 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/595a4ab3-66a1-41ae-93e2-9476c1b14270-installation-pull-secrets\") pod \"595a4ab3-66a1-41ae-93e2-9476c1b14270\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.544870 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/595a4ab3-66a1-41ae-93e2-9476c1b14270-registry-certificates\") pod \"595a4ab3-66a1-41ae-93e2-9476c1b14270\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.544888 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28t4b\" (UniqueName: \"kubernetes.io/projected/595a4ab3-66a1-41ae-93e2-9476c1b14270-kube-api-access-28t4b\") pod \"595a4ab3-66a1-41ae-93e2-9476c1b14270\" (UID: \"595a4ab3-66a1-41ae-93e2-9476c1b14270\") " Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.545768 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595a4ab3-66a1-41ae-93e2-9476c1b14270-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "595a4ab3-66a1-41ae-93e2-9476c1b14270" (UID: 
"595a4ab3-66a1-41ae-93e2-9476c1b14270"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.545829 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595a4ab3-66a1-41ae-93e2-9476c1b14270-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "595a4ab3-66a1-41ae-93e2-9476c1b14270" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.550714 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595a4ab3-66a1-41ae-93e2-9476c1b14270-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "595a4ab3-66a1-41ae-93e2-9476c1b14270" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.552297 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595a4ab3-66a1-41ae-93e2-9476c1b14270-kube-api-access-28t4b" (OuterVolumeSpecName: "kube-api-access-28t4b") pod "595a4ab3-66a1-41ae-93e2-9476c1b14270" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270"). InnerVolumeSpecName "kube-api-access-28t4b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.552796 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/595a4ab3-66a1-41ae-93e2-9476c1b14270-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "595a4ab3-66a1-41ae-93e2-9476c1b14270" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.554421 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595a4ab3-66a1-41ae-93e2-9476c1b14270-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "595a4ab3-66a1-41ae-93e2-9476c1b14270" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.554833 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "595a4ab3-66a1-41ae-93e2-9476c1b14270" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.563364 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/595a4ab3-66a1-41ae-93e2-9476c1b14270-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "595a4ab3-66a1-41ae-93e2-9476c1b14270" (UID: "595a4ab3-66a1-41ae-93e2-9476c1b14270"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.646248 5110 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/595a4ab3-66a1-41ae-93e2-9476c1b14270-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.646278 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-28t4b\" (UniqueName: \"kubernetes.io/projected/595a4ab3-66a1-41ae-93e2-9476c1b14270-kube-api-access-28t4b\") on node \"crc\" DevicePath \"\"" Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.646291 5110 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/595a4ab3-66a1-41ae-93e2-9476c1b14270-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.646300 5110 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/595a4ab3-66a1-41ae-93e2-9476c1b14270-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.646309 5110 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/595a4ab3-66a1-41ae-93e2-9476c1b14270-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.646318 5110 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/595a4ab3-66a1-41ae-93e2-9476c1b14270-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 14:22:04 crc kubenswrapper[5110]: I0122 14:22:04.646327 5110 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/595a4ab3-66a1-41ae-93e2-9476c1b14270-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 14:22:05 crc 
kubenswrapper[5110]: I0122 14:22:05.053516 5110 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-21 14:17:04 +0000 UTC" deadline="2026-02-16 09:34:45.345329161 +0000 UTC" Jan 22 14:22:05 crc kubenswrapper[5110]: I0122 14:22:05.053857 5110 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="595h12m40.291478578s" Jan 22 14:22:05 crc kubenswrapper[5110]: I0122 14:22:05.235268 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" event={"ID":"595a4ab3-66a1-41ae-93e2-9476c1b14270","Type":"ContainerDied","Data":"2adf73532ed4b98f61985749fce330acd0907657d846235607a7f93d6c6ef873"} Jan 22 14:22:05 crc kubenswrapper[5110]: I0122 14:22:05.235353 5110 scope.go:117] "RemoveContainer" containerID="febfc102e4f0d153ee6aa7f7ca86db3f96a6b5b21de6b520b2fb4197a0e3f24f" Jan 22 14:22:05 crc kubenswrapper[5110]: I0122 14:22:05.235464 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-x4jp5" Jan 22 14:22:05 crc kubenswrapper[5110]: I0122 14:22:05.270738 5110 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-x4jp5"] Jan 22 14:22:05 crc kubenswrapper[5110]: I0122 14:22:05.276258 5110 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-x4jp5"] Jan 22 14:22:05 crc kubenswrapper[5110]: I0122 14:22:05.479132 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484862-l7x5v" Jan 22 14:22:05 crc kubenswrapper[5110]: I0122 14:22:05.556075 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phcfg\" (UniqueName: \"kubernetes.io/projected/8461ae1c-afe1-4b23-80f2-146c3b1b20d5-kube-api-access-phcfg\") pod \"8461ae1c-afe1-4b23-80f2-146c3b1b20d5\" (UID: \"8461ae1c-afe1-4b23-80f2-146c3b1b20d5\") " Jan 22 14:22:05 crc kubenswrapper[5110]: I0122 14:22:05.560998 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8461ae1c-afe1-4b23-80f2-146c3b1b20d5-kube-api-access-phcfg" (OuterVolumeSpecName: "kube-api-access-phcfg") pod "8461ae1c-afe1-4b23-80f2-146c3b1b20d5" (UID: "8461ae1c-afe1-4b23-80f2-146c3b1b20d5"). InnerVolumeSpecName "kube-api-access-phcfg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:22:05 crc kubenswrapper[5110]: I0122 14:22:05.657787 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-phcfg\" (UniqueName: \"kubernetes.io/projected/8461ae1c-afe1-4b23-80f2-146c3b1b20d5-kube-api-access-phcfg\") on node \"crc\" DevicePath \"\"" Jan 22 14:22:06 crc kubenswrapper[5110]: I0122 14:22:06.054713 5110 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-21 14:17:04 +0000 UTC" deadline="2026-02-14 17:00:54.320750312 +0000 UTC" Jan 22 14:22:06 crc kubenswrapper[5110]: I0122 14:22:06.054766 5110 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="554h38m48.265989351s" Jan 22 14:22:06 crc kubenswrapper[5110]: I0122 14:22:06.241913 5110 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484862-l7x5v" Jan 22 14:22:06 crc kubenswrapper[5110]: I0122 14:22:06.241939 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484862-l7x5v" event={"ID":"8461ae1c-afe1-4b23-80f2-146c3b1b20d5","Type":"ContainerDied","Data":"718849169f8e8bb700278cd04b09434e0c2e47da0d04465ea136f38364b411ff"} Jan 22 14:22:06 crc kubenswrapper[5110]: I0122 14:22:06.242015 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="718849169f8e8bb700278cd04b09434e0c2e47da0d04465ea136f38364b411ff" Jan 22 14:22:06 crc kubenswrapper[5110]: I0122 14:22:06.280081 5110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="595a4ab3-66a1-41ae-93e2-9476c1b14270" path="/var/lib/kubelet/pods/595a4ab3-66a1-41ae-93e2-9476c1b14270/volumes" Jan 22 14:22:19 crc kubenswrapper[5110]: I0122 14:22:19.690987 5110 patch_prober.go:28] interesting pod/machine-config-daemon-grf5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:22:19 crc kubenswrapper[5110]: I0122 14:22:19.691427 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" podUID="6bfecfa4-ce38-4a92-a3dc-588176267b96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:22:49 crc kubenswrapper[5110]: I0122 14:22:49.692114 5110 patch_prober.go:28] interesting pod/machine-config-daemon-grf5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:22:49 crc 
kubenswrapper[5110]: I0122 14:22:49.692804 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" podUID="6bfecfa4-ce38-4a92-a3dc-588176267b96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:22:49 crc kubenswrapper[5110]: I0122 14:22:49.692890 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" Jan 22 14:22:49 crc kubenswrapper[5110]: I0122 14:22:49.694885 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d6656aaa510b1e729222f6ed0ef8ff67bc1783a0b2496049ceeafc8a653fe2b"} pod="openshift-machine-config-operator/machine-config-daemon-grf5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 14:22:49 crc kubenswrapper[5110]: I0122 14:22:49.695028 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" podUID="6bfecfa4-ce38-4a92-a3dc-588176267b96" containerName="machine-config-daemon" containerID="cri-o://7d6656aaa510b1e729222f6ed0ef8ff67bc1783a0b2496049ceeafc8a653fe2b" gracePeriod=600 Jan 22 14:22:50 crc kubenswrapper[5110]: I0122 14:22:50.531702 5110 generic.go:358] "Generic (PLEG): container finished" podID="6bfecfa4-ce38-4a92-a3dc-588176267b96" containerID="7d6656aaa510b1e729222f6ed0ef8ff67bc1783a0b2496049ceeafc8a653fe2b" exitCode=0 Jan 22 14:22:50 crc kubenswrapper[5110]: I0122 14:22:50.531782 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" event={"ID":"6bfecfa4-ce38-4a92-a3dc-588176267b96","Type":"ContainerDied","Data":"7d6656aaa510b1e729222f6ed0ef8ff67bc1783a0b2496049ceeafc8a653fe2b"} 
Jan 22 14:22:50 crc kubenswrapper[5110]: I0122 14:22:50.532056 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" event={"ID":"6bfecfa4-ce38-4a92-a3dc-588176267b96","Type":"ContainerStarted","Data":"bc7a33bce5089b818da6fdc4dddcfbcc79a781b7c8d4a1fe9946249241a1f3f3"}
Jan 22 14:22:50 crc kubenswrapper[5110]: I0122 14:22:50.532079 5110 scope.go:117] "RemoveContainer" containerID="6ecaf8dd09571a6f4b5924d1e4734e6a3c16b4eff3df4258d7238252038bcd11"
Jan 22 14:24:00 crc kubenswrapper[5110]: I0122 14:24:00.148510 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484864-k8flm"]
Jan 22 14:24:00 crc kubenswrapper[5110]: I0122 14:24:00.149643 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="595a4ab3-66a1-41ae-93e2-9476c1b14270" containerName="registry"
Jan 22 14:24:00 crc kubenswrapper[5110]: I0122 14:24:00.149656 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="595a4ab3-66a1-41ae-93e2-9476c1b14270" containerName="registry"
Jan 22 14:24:00 crc kubenswrapper[5110]: I0122 14:24:00.149672 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8461ae1c-afe1-4b23-80f2-146c3b1b20d5" containerName="oc"
Jan 22 14:24:00 crc kubenswrapper[5110]: I0122 14:24:00.149678 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="8461ae1c-afe1-4b23-80f2-146c3b1b20d5" containerName="oc"
Jan 22 14:24:00 crc kubenswrapper[5110]: I0122 14:24:00.149777 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="8461ae1c-afe1-4b23-80f2-146c3b1b20d5" containerName="oc"
Jan 22 14:24:00 crc kubenswrapper[5110]: I0122 14:24:00.149789 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="595a4ab3-66a1-41ae-93e2-9476c1b14270" containerName="registry"
Jan 22 14:24:00 crc kubenswrapper[5110]: I0122 14:24:00.198950 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484864-k8flm"]
Jan 22 14:24:00 crc kubenswrapper[5110]: I0122 14:24:00.199143 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484864-k8flm"
Jan 22 14:24:00 crc kubenswrapper[5110]: I0122 14:24:00.204180 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 22 14:24:00 crc kubenswrapper[5110]: I0122 14:24:00.204728 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 22 14:24:00 crc kubenswrapper[5110]: I0122 14:24:00.210442 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5nv5f\""
Jan 22 14:24:00 crc kubenswrapper[5110]: I0122 14:24:00.338777 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvjlp\" (UniqueName: \"kubernetes.io/projected/0a5039f3-cf50-4416-81c9-1f1f32fc872f-kube-api-access-xvjlp\") pod \"auto-csr-approver-29484864-k8flm\" (UID: \"0a5039f3-cf50-4416-81c9-1f1f32fc872f\") " pod="openshift-infra/auto-csr-approver-29484864-k8flm"
Jan 22 14:24:00 crc kubenswrapper[5110]: I0122 14:24:00.439716 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xvjlp\" (UniqueName: \"kubernetes.io/projected/0a5039f3-cf50-4416-81c9-1f1f32fc872f-kube-api-access-xvjlp\") pod \"auto-csr-approver-29484864-k8flm\" (UID: \"0a5039f3-cf50-4416-81c9-1f1f32fc872f\") " pod="openshift-infra/auto-csr-approver-29484864-k8flm"
Jan 22 14:24:00 crc kubenswrapper[5110]: I0122 14:24:00.469968 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvjlp\" (UniqueName: \"kubernetes.io/projected/0a5039f3-cf50-4416-81c9-1f1f32fc872f-kube-api-access-xvjlp\") pod \"auto-csr-approver-29484864-k8flm\" (UID: \"0a5039f3-cf50-4416-81c9-1f1f32fc872f\") " pod="openshift-infra/auto-csr-approver-29484864-k8flm"
Jan 22 14:24:00 crc kubenswrapper[5110]: I0122 14:24:00.527655 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484864-k8flm"
Jan 22 14:24:00 crc kubenswrapper[5110]: I0122 14:24:00.978192 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484864-k8flm"]
Jan 22 14:24:00 crc kubenswrapper[5110]: W0122 14:24:00.993376 5110 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a5039f3_cf50_4416_81c9_1f1f32fc872f.slice/crio-f4480088998e6fa0d0d2751c097f02d7835c33bd34afa642d78722570a33edf0 WatchSource:0}: Error finding container f4480088998e6fa0d0d2751c097f02d7835c33bd34afa642d78722570a33edf0: Status 404 returned error can't find the container with id f4480088998e6fa0d0d2751c097f02d7835c33bd34afa642d78722570a33edf0
Jan 22 14:24:01 crc kubenswrapper[5110]: I0122 14:24:01.945484 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484864-k8flm" event={"ID":"0a5039f3-cf50-4416-81c9-1f1f32fc872f","Type":"ContainerStarted","Data":"f4480088998e6fa0d0d2751c097f02d7835c33bd34afa642d78722570a33edf0"}
Jan 22 14:24:02 crc kubenswrapper[5110]: I0122 14:24:02.965498 5110 generic.go:358] "Generic (PLEG): container finished" podID="0a5039f3-cf50-4416-81c9-1f1f32fc872f" containerID="432baf4c68281d867f171c69599a0f443d4f2ed6d45b45873f9115ad0a79a93d" exitCode=0
Jan 22 14:24:02 crc kubenswrapper[5110]: I0122 14:24:02.965595 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484864-k8flm" event={"ID":"0a5039f3-cf50-4416-81c9-1f1f32fc872f","Type":"ContainerDied","Data":"432baf4c68281d867f171c69599a0f443d4f2ed6d45b45873f9115ad0a79a93d"}
Jan 22 14:24:04 crc kubenswrapper[5110]: I0122 14:24:04.228347 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484864-k8flm"
Jan 22 14:24:04 crc kubenswrapper[5110]: I0122 14:24:04.394291 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvjlp\" (UniqueName: \"kubernetes.io/projected/0a5039f3-cf50-4416-81c9-1f1f32fc872f-kube-api-access-xvjlp\") pod \"0a5039f3-cf50-4416-81c9-1f1f32fc872f\" (UID: \"0a5039f3-cf50-4416-81c9-1f1f32fc872f\") "
Jan 22 14:24:04 crc kubenswrapper[5110]: I0122 14:24:04.401071 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5039f3-cf50-4416-81c9-1f1f32fc872f-kube-api-access-xvjlp" (OuterVolumeSpecName: "kube-api-access-xvjlp") pod "0a5039f3-cf50-4416-81c9-1f1f32fc872f" (UID: "0a5039f3-cf50-4416-81c9-1f1f32fc872f"). InnerVolumeSpecName "kube-api-access-xvjlp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 14:24:04 crc kubenswrapper[5110]: I0122 14:24:04.495955 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xvjlp\" (UniqueName: \"kubernetes.io/projected/0a5039f3-cf50-4416-81c9-1f1f32fc872f-kube-api-access-xvjlp\") on node \"crc\" DevicePath \"\""
Jan 22 14:24:04 crc kubenswrapper[5110]: I0122 14:24:04.980418 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484864-k8flm" event={"ID":"0a5039f3-cf50-4416-81c9-1f1f32fc872f","Type":"ContainerDied","Data":"f4480088998e6fa0d0d2751c097f02d7835c33bd34afa642d78722570a33edf0"}
Jan 22 14:24:04 crc kubenswrapper[5110]: I0122 14:24:04.980463 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4480088998e6fa0d0d2751c097f02d7835c33bd34afa642d78722570a33edf0"
Jan 22 14:24:04 crc kubenswrapper[5110]: I0122 14:24:04.980428 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484864-k8flm"
Jan 22 14:24:49 crc kubenswrapper[5110]: I0122 14:24:49.691803 5110 patch_prober.go:28] interesting pod/machine-config-daemon-grf5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 14:24:49 crc kubenswrapper[5110]: I0122 14:24:49.692593 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" podUID="6bfecfa4-ce38-4a92-a3dc-588176267b96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 14:25:19 crc kubenswrapper[5110]: I0122 14:25:19.691441 5110 patch_prober.go:28] interesting pod/machine-config-daemon-grf5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 14:25:19 crc kubenswrapper[5110]: I0122 14:25:19.692122 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" podUID="6bfecfa4-ce38-4a92-a3dc-588176267b96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 14:25:20 crc kubenswrapper[5110]: I0122 14:25:20.449804 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-694667f55-nhlw4_329d8e2c-a053-4b58-acac-4758df02a3e8/oauth-openshift/1.log"
Jan 22 14:25:20 crc kubenswrapper[5110]: I0122 14:25:20.456609 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-694667f55-nhlw4_329d8e2c-a053-4b58-acac-4758df02a3e8/oauth-openshift/1.log"
Jan 22 14:25:20 crc kubenswrapper[5110]: I0122 14:25:20.494480 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 22 14:25:20 crc kubenswrapper[5110]: I0122 14:25:20.494686 5110 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Jan 22 14:25:49 crc kubenswrapper[5110]: I0122 14:25:49.691103 5110 patch_prober.go:28] interesting pod/machine-config-daemon-grf5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 14:25:49 crc kubenswrapper[5110]: I0122 14:25:49.691858 5110 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" podUID="6bfecfa4-ce38-4a92-a3dc-588176267b96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 14:25:49 crc kubenswrapper[5110]: I0122 14:25:49.691924 5110 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grf5q"
Jan 22 14:25:49 crc kubenswrapper[5110]: I0122 14:25:49.692660 5110 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc7a33bce5089b818da6fdc4dddcfbcc79a781b7c8d4a1fe9946249241a1f3f3"} pod="openshift-machine-config-operator/machine-config-daemon-grf5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 14:25:49 crc kubenswrapper[5110]: I0122 14:25:49.692742 5110 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" podUID="6bfecfa4-ce38-4a92-a3dc-588176267b96" containerName="machine-config-daemon" containerID="cri-o://bc7a33bce5089b818da6fdc4dddcfbcc79a781b7c8d4a1fe9946249241a1f3f3" gracePeriod=600
Jan 22 14:25:49 crc kubenswrapper[5110]: I0122 14:25:49.826991 5110 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 22 14:25:50 crc kubenswrapper[5110]: I0122 14:25:50.654324 5110 generic.go:358] "Generic (PLEG): container finished" podID="6bfecfa4-ce38-4a92-a3dc-588176267b96" containerID="bc7a33bce5089b818da6fdc4dddcfbcc79a781b7c8d4a1fe9946249241a1f3f3" exitCode=0
Jan 22 14:25:50 crc kubenswrapper[5110]: I0122 14:25:50.654415 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" event={"ID":"6bfecfa4-ce38-4a92-a3dc-588176267b96","Type":"ContainerDied","Data":"bc7a33bce5089b818da6fdc4dddcfbcc79a781b7c8d4a1fe9946249241a1f3f3"}
Jan 22 14:25:50 crc kubenswrapper[5110]: I0122 14:25:50.654837 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grf5q" event={"ID":"6bfecfa4-ce38-4a92-a3dc-588176267b96","Type":"ContainerStarted","Data":"83fd55af5d1f5fdef0a13f0b0bd9a05eb7b1ea8e1c0cd5ece867d343d4155ce7"}
Jan 22 14:25:50 crc kubenswrapper[5110]: I0122 14:25:50.654876 5110 scope.go:117] "RemoveContainer" containerID="7d6656aaa510b1e729222f6ed0ef8ff67bc1783a0b2496049ceeafc8a653fe2b"
Jan 22 14:26:00 crc kubenswrapper[5110]: I0122 14:26:00.135712 5110 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484866-dktdf"]
Jan 22 14:26:00 crc kubenswrapper[5110]: I0122 14:26:00.136762 5110 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0a5039f3-cf50-4416-81c9-1f1f32fc872f" containerName="oc"
Jan 22 14:26:00 crc kubenswrapper[5110]: I0122 14:26:00.136777 5110 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5039f3-cf50-4416-81c9-1f1f32fc872f" containerName="oc"
Jan 22 14:26:00 crc kubenswrapper[5110]: I0122 14:26:00.136892 5110 memory_manager.go:356] "RemoveStaleState removing state" podUID="0a5039f3-cf50-4416-81c9-1f1f32fc872f" containerName="oc"
Jan 22 14:26:00 crc kubenswrapper[5110]: I0122 14:26:00.143155 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484866-dktdf"
Jan 22 14:26:00 crc kubenswrapper[5110]: I0122 14:26:00.144906 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484866-dktdf"]
Jan 22 14:26:00 crc kubenswrapper[5110]: I0122 14:26:00.146156 5110 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5nv5f\""
Jan 22 14:26:00 crc kubenswrapper[5110]: I0122 14:26:00.146737 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 22 14:26:00 crc kubenswrapper[5110]: I0122 14:26:00.146942 5110 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 22 14:26:00 crc kubenswrapper[5110]: I0122 14:26:00.219738 5110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rpgr\" (UniqueName: \"kubernetes.io/projected/e036e2e1-4316-4093-88a1-f71de7c201d0-kube-api-access-9rpgr\") pod \"auto-csr-approver-29484866-dktdf\" (UID: \"e036e2e1-4316-4093-88a1-f71de7c201d0\") " pod="openshift-infra/auto-csr-approver-29484866-dktdf"
Jan 22 14:26:00 crc kubenswrapper[5110]: I0122 14:26:00.321612 5110 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9rpgr\" (UniqueName: \"kubernetes.io/projected/e036e2e1-4316-4093-88a1-f71de7c201d0-kube-api-access-9rpgr\") pod \"auto-csr-approver-29484866-dktdf\" (UID: \"e036e2e1-4316-4093-88a1-f71de7c201d0\") " pod="openshift-infra/auto-csr-approver-29484866-dktdf"
Jan 22 14:26:00 crc kubenswrapper[5110]: I0122 14:26:00.349782 5110 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rpgr\" (UniqueName: \"kubernetes.io/projected/e036e2e1-4316-4093-88a1-f71de7c201d0-kube-api-access-9rpgr\") pod \"auto-csr-approver-29484866-dktdf\" (UID: \"e036e2e1-4316-4093-88a1-f71de7c201d0\") " pod="openshift-infra/auto-csr-approver-29484866-dktdf"
Jan 22 14:26:00 crc kubenswrapper[5110]: I0122 14:26:00.464598 5110 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484866-dktdf"
Jan 22 14:26:00 crc kubenswrapper[5110]: I0122 14:26:00.737514 5110 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484866-dktdf"]
Jan 22 14:26:01 crc kubenswrapper[5110]: I0122 14:26:01.724184 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484866-dktdf" event={"ID":"e036e2e1-4316-4093-88a1-f71de7c201d0","Type":"ContainerStarted","Data":"e247b24273ef3c0f53b3e9d51edb4c9911015c5a2b250ee892a6514e459b00c0"}
Jan 22 14:26:02 crc kubenswrapper[5110]: I0122 14:26:02.733201 5110 generic.go:358] "Generic (PLEG): container finished" podID="e036e2e1-4316-4093-88a1-f71de7c201d0" containerID="cb7ac018416fcf8d9bc77aba499bc5e74f965eae15d40730f107740181ff9b3d" exitCode=0
Jan 22 14:26:02 crc kubenswrapper[5110]: I0122 14:26:02.733357 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484866-dktdf" event={"ID":"e036e2e1-4316-4093-88a1-f71de7c201d0","Type":"ContainerDied","Data":"cb7ac018416fcf8d9bc77aba499bc5e74f965eae15d40730f107740181ff9b3d"}
Jan 22 14:26:03 crc kubenswrapper[5110]: I0122 14:26:03.964390 5110 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484866-dktdf" Jan 22 14:26:04 crc kubenswrapper[5110]: I0122 14:26:04.087892 5110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rpgr\" (UniqueName: \"kubernetes.io/projected/e036e2e1-4316-4093-88a1-f71de7c201d0-kube-api-access-9rpgr\") pod \"e036e2e1-4316-4093-88a1-f71de7c201d0\" (UID: \"e036e2e1-4316-4093-88a1-f71de7c201d0\") " Jan 22 14:26:04 crc kubenswrapper[5110]: I0122 14:26:04.095380 5110 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e036e2e1-4316-4093-88a1-f71de7c201d0-kube-api-access-9rpgr" (OuterVolumeSpecName: "kube-api-access-9rpgr") pod "e036e2e1-4316-4093-88a1-f71de7c201d0" (UID: "e036e2e1-4316-4093-88a1-f71de7c201d0"). InnerVolumeSpecName "kube-api-access-9rpgr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 14:26:04 crc kubenswrapper[5110]: I0122 14:26:04.189326 5110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9rpgr\" (UniqueName: \"kubernetes.io/projected/e036e2e1-4316-4093-88a1-f71de7c201d0-kube-api-access-9rpgr\") on node \"crc\" DevicePath \"\"" Jan 22 14:26:04 crc kubenswrapper[5110]: I0122 14:26:04.746102 5110 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484866-dktdf" event={"ID":"e036e2e1-4316-4093-88a1-f71de7c201d0","Type":"ContainerDied","Data":"e247b24273ef3c0f53b3e9d51edb4c9911015c5a2b250ee892a6514e459b00c0"} Jan 22 14:26:04 crc kubenswrapper[5110]: I0122 14:26:04.746140 5110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e247b24273ef3c0f53b3e9d51edb4c9911015c5a2b250ee892a6514e459b00c0" Jan 22 14:26:04 crc kubenswrapper[5110]: I0122 14:26:04.746155 5110 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484866-dktdf"